Enable Gzip Compression in aiohttp Server
Enabling Gzip compression in your aiohttp server can significantly improve your application’s performance by reducing the size of HTTP responses.
In this tutorial, you’ll learn how to implement Gzip compression, configure it, and understand its impact on your server’s performance.
You’ll learn how to use the aiohttp_compress library, Python’s built-in gzip module, and custom middleware to achieve good compression results.
Implement Gzip Compression Middleware
Using aiohttp_compress
To enable Gzip compression in your aiohttp server, use the aiohttp_compress library.
First, install it using pip:
pip install aiohttp_compress
Now, integrate it into your aiohttp server setup:
from aiohttp import web
from aiohttp_compress import compress_middleware


async def handle(request):
    return web.Response(text="Welcome to our compressed aiohttp server!")


@web.middleware
async def compression_middleware(request, handler):
    return await compress_middleware(request, handler)


app = web.Application(middlewares=[compression_middleware])
app.router.add_get('/', handle)
web.run_app(app)
Output:
======== Running on http://0.0.0.0:8080 ========
(Press CTRL+C to quit)
The server is now running with Gzip compression enabled.
When a client that supports Gzip compression makes a request, the response will be automatically compressed.
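A quick way to confirm this from the client side is to request the page with a client that advertises Gzip support and inspect the response headers. The following is a minimal check, assuming the server above is running locally on port 8080:

import asyncio

import aiohttp


async def main():
    # The aiohttp client sends "Accept-Encoding: gzip, deflate" by default
    # and transparently decompresses the body for you.
    async with aiohttp.ClientSession() as session:
        async with session.get("http://localhost:8080/") as response:
            print("Content-Encoding:", response.headers.get("Content-Encoding"))
            print("Body:", await response.text())


asyncio.run(main())

If compression is active, the printed Content-Encoding header will be gzip while the body still reads as plain text.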
Custom middleware
For more control over the compression process, you can create custom middleware:
from aiohttp import web
import gzip


@web.middleware
async def gzip_middleware(request, handler):
    response = await handler(request)
    if 'gzip' in request.headers.get('Accept-Encoding', '').lower():
        original_body = response.body
        compressed_body = gzip.compress(original_body)
        response.body = compressed_body
        response.headers['Content-Encoding'] = 'gzip'
        response.headers['Content-Length'] = str(len(compressed_body))
    return response


async def handle(request):
    return web.Response(text="This response may be compressed!")


app = web.Application(middlewares=[gzip_middleware])
app.router.add_get('/', handle)
web.run_app(app)
Output:
======== Running on http://0.0.0.0:8080 ========
(Press CTRL+C to quit)
This custom middleware checks if the client supports Gzip compression and applies it when appropriate.
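As an aside, aiohttp also ships per-response compression out of the box: calling enable_compression() on a response lets aiohttp negotiate the encoding from the request's Accept-Encoding header, which can be enough if you only need compression on a few handlers. A minimal sketch:

from aiohttp import web


async def handle(request):
    response = web.Response(text="This response may be compressed!")
    # aiohttp picks a suitable encoding based on the client's Accept-Encoding header.
    response.enable_compression()
    return response


app = web.Application()
app.router.add_get('/', handle)
web.run_app(app)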
Compression Configuration
Minimum size thresholds
You can set a minimum size threshold to avoid compressing small responses:
MIN_SIZE = 1000  # bytes


@web.middleware
async def gzip_middleware(request, handler):
    response = await handler(request)
    if 'gzip' in request.headers.get('Accept-Encoding', '').lower() and len(response.body) > MIN_SIZE:
        compressed_body = gzip.compress(response.body)
        response.body = compressed_body
        response.headers['Content-Encoding'] = 'gzip'
        response.headers['Content-Length'] = str(len(compressed_body))
    return response
Output:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Responses smaller than 1000 bytes will not be compressed, saving processing time for small payloads.
Compression levels
You can adjust the compression level to balance between speed and compression ratio:
compressed_body = gzip.compress(response.body, compresslevel=6)
Output:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Encoding: gzip
Content-Length: 32
A compression level of 6 provides a good balance between compression ratio and CPU usage.
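To see the trade-off in numbers, you can benchmark a few levels with the gzip module on its own (gzip.compress defaults to compresslevel=9, the slowest and smallest setting). The snippet below is a standalone sketch, independent of the server code:

import gzip
import time

# Repetitive text compresses extremely well; real payloads will vary.
payload = ("This response may be compressed! " * 10000).encode()

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = gzip.compress(payload, compresslevel=level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"level={level}: {len(payload)} -> {len(compressed)} bytes in {elapsed_ms:.2f} ms")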
Content type filtering
You can compress only specific content types by checking the response content type:
COMPRESSIBLE_TYPES = ['text/plain', 'application/json', 'text/html']

if response.content_type in COMPRESSIBLE_TYPES:
    compressed_body = gzip.compress(response.body)
Output:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Encoding: gzip
Content-Length: 32
Only responses with content types in the COMPRESSIBLE_TYPES list will be compressed.
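Inside the middleware, this check simply joins the Accept-Encoding test; a minimal sketch of the combined condition (using the same example COMPRESSIBLE_TYPES list) could look like this:

COMPRESSIBLE_TYPES = ['text/plain', 'application/json', 'text/html']


@web.middleware
async def gzip_middleware(request, handler):
    response = await handler(request)
    if ('gzip' in request.headers.get('Accept-Encoding', '').lower()
            and response.content_type in COMPRESSIBLE_TYPES):
        compressed_body = gzip.compress(response.body)
        response.body = compressed_body
        response.headers['Content-Encoding'] = 'gzip'
        response.headers['Content-Length'] = str(len(compressed_body))
    return response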
Exclude specific routes or responses from compression
You can exclude certain routes from compression by checking the request path:
EXCLUDED_ROUTES = ['/api/large-file', '/static/images']

if request.path not in EXCLUDED_ROUTES:
    compressed_body = gzip.compress(response.body)
Output:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 32
Responses from excluded routes will not be compressed.
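Note that request.path is matched exactly here, so only those two URLs are skipped. If you want to exclude everything under a path prefix (for example, all files below /static/images/), a prefix check is a common alternative; EXCLUDED_PREFIXES below is just an illustrative name:

EXCLUDED_PREFIXES = ('/api/large-file', '/static/images')

# Skip compression for any request whose path starts with an excluded prefix.
if not request.path.startswith(EXCLUDED_PREFIXES):
    compressed_body = gzip.compress(response.body)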
Compression Ratio Tracking
Track compression ratios to evaluate the effectiveness of your compression:
@web.middleware
async def gzip_middleware(request, handler):
    response = await handler(request)
    if 'gzip' in request.headers.get('Accept-Encoding', '').lower():
        original_body = response.body
        original_size = len(original_body)
        if original_size > 100:  # Example threshold
            compressed_body = gzip.compress(original_body, compresslevel=2)
            compressed_size = len(compressed_body)

            # Update the response with the compressed body
            response.body = compressed_body
            response.headers['Content-Encoding'] = 'gzip'
            response.headers['Content-Length'] = str(compressed_size)

            compression_ratio = (original_size - compressed_size) / original_size * 100
            print(f"Compression ratio: {compression_ratio:.2f}%")
        else:
            print("Response too small to compress effectively.")
    return response


async def handle(request):
    large_text = "This response may be compressed! " * 50
    return web.Response(text=large_text)
Output:
Compression ratio: 95.70%
This output shows that the compression reduced the response size by 95.70%.
If you want to test compression on smaller responses, you can reduce the threshold.
However, keep in mind that compressing very small responses might not be efficient.
Response Time Impact Analysis
Analyze the impact of compression on response times:
@web.middleware
async def gzip_middleware(request, handler):
    response = await handler(request)
    if 'gzip' in request.headers.get('Accept-Encoding', '').lower():
        original_body = response.body
        original_size = len(original_body)
        if original_size > 100:
            import time

            start_time = time.time()
            compressed_body = gzip.compress(original_body, compresslevel=2)
            compressed_size = len(compressed_body)
            end_time = time.time()

            # Update the response with the compressed body
            response.body = compressed_body

            compression_time = (end_time - start_time) * 1000
            print(f"Compression time: {compression_time:.2f} ms")

            response.headers['Content-Encoding'] = 'gzip'
            response.headers['Content-Length'] = str(compressed_size)

            compression_ratio = (original_size - compressed_size) / original_size * 100
            print(f"Compression ratio: {compression_ratio:.2f}%")
        else:
            print("Response too small to compress effectively.")
    return response


async def handle(request):
    large_text = "This response may be compressed! " * 100000
    return web.Response(text=large_text)
Output:
Compression time: 7.22 ms
The compression process took 7.22 milliseconds, which is negligible for most applications.
Note that we repeated the text 100,000 times to make the compression time measurable; with a smaller response the timer would report roughly 0 milliseconds.
Dynamic Compression Decisions
Content-based compression choices
You can make compression decisions based on content characteristics:
def should_compress(content):
    return len(content) > 1000 or content.count('\n') > 10


if should_compress(response.text):
    compressed_body = gzip.compress(response.body)
Output:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Encoding: gzip
Content-Length: 74
This approach allows for more nuanced decisions on when to apply compression based on content properties.
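You can sanity-check the heuristic on sample payloads before wiring it into the middleware; a tiny standalone sketch:

def should_compress(content):
    # Compress long bodies or bodies with many lines (e.g. logs, JSON dumps).
    return len(content) > 1000 or content.count('\n') > 10


print(should_compress("tiny response"))   # False: short, single line
print(should_compress("x" * 2000))        # True: exceeds the size threshold
print(should_compress("line\n" * 20))     # True: more than 10 newlines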
Client capability-based compression
You can check client capabilities before deciding to compress:
import zlib

accept_encoding = request.headers.get('Accept-Encoding', '').lower()

if 'gzip' in accept_encoding:
    compressed_body = gzip.compress(response.body)
elif 'deflate' in accept_encoding:
    compressed_body = zlib.compress(response.body)
else:
    compressed_body = response.body
Output:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Encoding: gzip
Content-Length: 74
This code adapts the compression method based on the client’s capabilities.
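When you branch like this, remember to set a Content-Encoding header that matches the algorithm you actually applied, otherwise clients will misinterpret the body. A fuller sketch of the same idea inside a middleware (negotiating_middleware is an illustrative name):

import gzip
import zlib

from aiohttp import web


@web.middleware
async def negotiating_middleware(request, handler):
    response = await handler(request)
    accept_encoding = request.headers.get('Accept-Encoding', '').lower()

    if 'gzip' in accept_encoding:
        response.body = gzip.compress(response.body)
        response.headers['Content-Encoding'] = 'gzip'
    elif 'deflate' in accept_encoding:
        response.body = zlib.compress(response.body)
        response.headers['Content-Encoding'] = 'deflate'
    # Otherwise the body is sent uncompressed and no header is added.

    return response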
BREACH and CRIME Attack Mitigation
You can mitigate compression-based attacks such as BREACH and CRIME, which exploit the way compressed response sizes can leak secrets, by avoiding the compression of sensitive data:
def contains_sensitive_data(content):
    return 'password' in content or 'token' in content


if not contains_sensitive_data(response.text):
    compressed_body = gzip.compress(response.body)
else:
    compressed_body = response.body
Output:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 35
By not compressing responses containing sensitive data, you reduce the risk of these compression-based attacks.
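Wired into the middleware, the check could sit next to the Accept-Encoding test. In the sketch below, contains_sensitive_data is the same illustrative helper, and the response is only inspected when it actually has a text body:

def contains_sensitive_data(content):
    # Illustrative markers only; adapt them to your application.
    return 'password' in content or 'token' in content


@web.middleware
async def gzip_middleware(request, handler):
    response = await handler(request)
    accepts_gzip = 'gzip' in request.headers.get('Accept-Encoding', '').lower()
    if accepts_gzip and response.text is not None and not contains_sensitive_data(response.text):
        response.body = gzip.compress(response.body)
        response.headers['Content-Encoding'] = 'gzip'
        response.headers['Content-Length'] = str(len(response.body))
    return response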