aiohttp vs. Tornado: Performance Benchmarks and Comparisons

In this tutorial, you’ll explore the differences between aiohttp and Tornado, two popular Python frameworks for building asynchronous web applications.

You’ll learn about their key features, strengths, and how to implement common web development tasks in each framework.

In general, these are the major comparison points between them:

| Feature | aiohttp | Tornado |
| --- | --- | --- |
| Primary focus | Asynchronous HTTP client/server | Full-stack web framework |
| Asynchronous model | asyncio-based | Its own IOLoop (built on asyncio since Tornado 5) |
| Performance | Generally faster for I/O-bound tasks | Good performance, especially for long-polling |
| HTTP client | Built-in, powerful async client | Basic async client included |
| Caching | Third-party libraries (e.g. aiocache) | No built-in cache; hand-rolled dict caches are common |
| Template engine | No built-in engine (Jinja2 is a common choice) | Built-in templating engine |

WebSocket Benchmark

Both frameworks support WebSockets, so you could benchmark how they handle a large number of concurrent WebSocket connections:

import asyncio
import time
import aiohttp
from aiohttp import web
import tornado.ioloop
import tornado.web
import tornado.websocket
import websockets

# Shared configuration
NUM_CONNECTIONS = 1000
MESSAGES_PER_CONNECTION = 100
PORT = 8000
MAX_RETRIES = 3

# aiohttp WebSocket server
async def aiohttp_handler(request):
  ws = web.WebSocketResponse()
  await ws.prepare(request)
  try:
    async for msg in ws:
      if msg.type == aiohttp.WSMsgType.TEXT:
        await ws.send_str(msg.data)
  except ConnectionResetError:
    pass
  return ws
async def run_aiohttp_server():
  app = web.Application()
  app.router.add_get('/ws', aiohttp_handler)
  runner = web.AppRunner(app)
  await runner.setup()
  site = web.TCPSite(runner, 'localhost', PORT)
  await site.start()
  print("aiohttp server started")
  return runner  # Returned so the benchmark can shut the server down later

# Tornado WebSocket server
class TornadoHandler(tornado.websocket.WebSocketHandler):
  def on_message(self, message):
    self.write_message(message)
def run_tornado_server():
  # Tornado 6+ runs on asyncio, so a thread other than the main one
  # needs its own event loop before the IOLoop can start
  asyncio.set_event_loop(asyncio.new_event_loop())
  app = tornado.web.Application([
      (r"/ws", TornadoHandler),
    ])
  app.listen(PORT)
  print("Tornado server started")
  tornado.ioloop.IOLoop.current().start()

# WebSocket client for benchmarking
async def websocket_client(url, semaphore):
  for _ in range(MAX_RETRIES):
    try:
      async with semaphore:
        async with websockets.connect(url) as websocket:
          for _ in range(MESSAGES_PER_CONNECTION):
            await websocket.send("Hello")
            await websocket.recv()
      return  # Successful completion
    except (websockets.exceptions.ConnectionClosed, ConnectionResetError):
      await asyncio.sleep(0.1)  # Wait before retrying
  print(f"Failed to complete connection to {url} after {MAX_RETRIES} attempts")

# Benchmark function
async def run_benchmark(server_func, server_name):
  # Start the server; an async server function may return a runner for cleanup
  runner = None
  if asyncio.iscoroutinefunction(server_func):
    runner = await server_func()
  else:
    asyncio.get_running_loop().run_in_executor(None, server_func)
  await asyncio.sleep(1)  # Wait for the server to start

  # Run benchmark
  start_time = time.time()
  semaphore = asyncio.Semaphore(100)  # Limit concurrent connections
  tasks = [websocket_client(f"ws://localhost:{PORT}/ws", semaphore) for _ in range(NUM_CONNECTIONS)]
  await asyncio.gather(*tasks)
  end_time = time.time()
  total_messages = NUM_CONNECTIONS * MESSAGES_PER_CONNECTION
  duration = end_time - start_time
  messages_per_second = total_messages / duration
  print(f"{server_name} WebSocket Benchmark Results:")
  print(f"Time taken: {duration:.2f} seconds")
  print(f"Total messages: {total_messages}")
  print(f"Messages per second: {messages_per_second:.2f}")
  print("-----------------------------")
  if runner is not None:
    await runner.cleanup()  # Free the port for the next benchmark
async def main():
  # Run aiohttp benchmark
  await run_benchmark(run_aiohttp_server, "aiohttp")

  await asyncio.sleep(2)  # Wait between benchmarks

  # Run Tornado benchmark
  await run_benchmark(run_tornado_server, "Tornado")
if __name__ == "__main__":
  asyncio.run(main())

Output:

aiohttp server started
aiohttp WebSocket Benchmark Results:
Time taken: 122.97 seconds
Total messages: 100000
Messages per second: 813.21
-----------------------------
Tornado server started
Tornado WebSocket Benchmark Results:
Time taken: 148.94 seconds
Total messages: 100000
Messages per second: 671.42
-----------------------------
  1. We define WebSocket servers for both aiohttp and Tornado.
  2. We create a WebSocket client function that connects to the server and sends a specified number of messages.
  3. The run_benchmark function starts the server, creates multiple WebSocket connections, sends messages, and measures the time taken.

aiohttp processed about 813 messages per second, while Tornado processed about 671. In other words, aiohttp achieved roughly 21% higher throughput in the WebSocket benchmark.

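As a sanity check on that figure, the throughput numbers can be recomputed from the reported timings. The durations below are the rounded values from the run above, so the last decimal may differ slightly from the printed output:

```python
# Recompute the throughput figures from the reported benchmark timings
TOTAL_MESSAGES = 100_000

aiohttp_duration = 122.97  # seconds, from the aiohttp run above
tornado_duration = 148.94  # seconds, from the Tornado run above

aiohttp_mps = TOTAL_MESSAGES / aiohttp_duration  # ~813 messages/second
tornado_mps = TOTAL_MESSAGES / tornado_duration  # ~671 messages/second

# Relative throughput advantage of aiohttp over Tornado
advantage = (aiohttp_mps / tornado_mps - 1) * 100
print(f"aiohttp: {aiohttp_mps:.2f} msg/s")
print(f"Tornado: {tornado_mps:.2f} msg/s")
print(f"aiohttp advantage: {advantage:.1f}%")
```
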
Handling JSON Data

Let’s measure the time taken to serialize (encode) and deserialize (decode) JSON data, as well as the time to send and receive JSON payloads over HTTP.

import asyncio
import aiohttp
from aiohttp import web
import tornado.ioloop
import tornado.web
import json
import time
import random
NUM_REQUESTS = 10000
PORT = 8000
JSON_SIZE = 1000  # Number of key-value pairs in the JSON payload

# Generate a large JSON payload
def generate_large_json():
  return {f"key_{i}": random.random() for i in range(JSON_SIZE)}
LARGE_JSON = generate_large_json()

# aiohttp server
async def aiohttp_handler(request):
  data = await request.json()
  return web.json_response(data)
async def run_aiohttp_server():
  app = web.Application()
  app.router.add_post('/', aiohttp_handler)
  runner = web.AppRunner(app)
  await runner.setup()
  site = web.TCPSite(runner, 'localhost', PORT)
  await site.start()
  print("aiohttp server started")
  return runner  # Returned so the benchmark can shut the server down later

# Tornado server
class TornadoHandler(tornado.web.RequestHandler):
  def post(self):
    data = json.loads(self.request.body)
    self.write(json.dumps(data))

def run_tornado_server():
  # Tornado 6+ runs on asyncio, so a thread other than the main one
  # needs its own event loop before the IOLoop can start
  asyncio.set_event_loop(asyncio.new_event_loop())
  app = tornado.web.Application([
      (r"/", TornadoHandler),
    ])
  app.listen(PORT)
  print("Tornado server started")
  tornado.ioloop.IOLoop.current().start()

# Benchmark functions
async def json_benchmark(client_session, url):
  start_time = time.time()
  for _ in range(NUM_REQUESTS):
    async with client_session.post(url, json=LARGE_JSON) as response:
      await response.json()
  end_time = time.time()
  return end_time - start_time

def measure_json_ops():
  # Measure JSON serialization
  start_time = time.time()
  for _ in range(NUM_REQUESTS):
    json.dumps(LARGE_JSON)
  serialize_time = time.time() - start_time
  json_string = json.dumps(LARGE_JSON)

  # Measure JSON deserialization
  start_time = time.time()
  for _ in range(NUM_REQUESTS):
    json.loads(json_string)
  deserialize_time = time.time() - start_time
  return serialize_time, deserialize_time
async def run_benchmark(server_func, server_name):
  # Start the server; an async server function may return a runner for cleanup
  runner = None
  if asyncio.iscoroutinefunction(server_func):
    runner = await server_func()
  else:
    asyncio.get_running_loop().run_in_executor(None, server_func)
  await asyncio.sleep(1)  # Wait for the server to start

  # Measure JSON operations
  serialize_time, deserialize_time = measure_json_ops()

  # Measure HTTP JSON handling
  async with aiohttp.ClientSession() as session:
    http_time = await json_benchmark(session, f"http://localhost:{PORT}")
  print(f"{server_name} JSON Handling Benchmark Results:")
  print(f"JSON Serialization: {serialize_time:.4f} seconds for {NUM_REQUESTS} operations")
  print(f"JSON Deserialization: {deserialize_time:.4f} seconds for {NUM_REQUESTS} operations")
  print(f"HTTP JSON Round-trip: {http_time:.4f} seconds for {NUM_REQUESTS} requests")
  print(f"Total JSON size: {len(json.dumps(LARGE_JSON))} bytes")
  print("-----------------------------")
  if runner is not None:
    await runner.cleanup()  # Free the port for the next benchmark
async def main():
  # Run aiohttp benchmark
  await run_benchmark(run_aiohttp_server, "aiohttp")
  await asyncio.sleep(2)  # Wait between benchmarks

  # Run Tornado benchmark
  await run_benchmark(run_tornado_server, "Tornado")
if __name__ == "__main__":
  asyncio.run(main())

Output:

aiohttp server started
aiohttp JSON Handling Benchmark Results:
JSON Serialization: 13.2391 seconds for 10000 operations
JSON Deserialization: 10.6222 seconds for 10000 operations
HTTP JSON Round-trip: 192.3151 seconds for 10000 requests
Total JSON size: 31136 bytes
-----------------------------
Tornado server started
Tornado JSON Handling Benchmark Results:
JSON Serialization: 6.6087 seconds for 10000 operations
JSON Deserialization: 4.5010 seconds for 10000 operations
HTTP JSON Round-trip: 199.5136 seconds for 10000 requests
Total JSON size: 31136 bytes
-----------------------------

The raw serialization and deserialization timings differ noticeably between the two runs, but note that measure_json_ops calls the standard library's json module in both cases, without touching either framework. That gap therefore reflects run-to-run variance on the machine rather than anything framework-specific.

The meaningful comparison is HTTP JSON handling, where performance is very close, with aiohttp having a slight edge of about 7 seconds over 10,000 requests (roughly 0.7 ms per request).
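
To put the raw JSON cost in perspective, the totals can be converted to per-request figures. The numbers below are from the aiohttp run above; the exact values will vary per run:

```python
NUM_REQUESTS = 10_000

# Totals reported for the aiohttp run above, in seconds
serialize_total = 13.2391
deserialize_total = 10.6222
roundtrip_total = 192.3151

# Per-request cost in milliseconds
serialize_ms = serialize_total / NUM_REQUESTS * 1000      # ~1.3 ms
deserialize_ms = deserialize_total / NUM_REQUESTS * 1000  # ~1.1 ms
roundtrip_ms = roundtrip_total / NUM_REQUESTS * 1000      # ~19.2 ms

# Fraction of a round-trip spent on one encode plus one decode
json_share = (serialize_ms + deserialize_ms) / roundtrip_ms * 100
print(f"JSON encode+decode: {serialize_ms + deserialize_ms:.2f} ms "
      f"of a {roundtrip_ms:.2f} ms round-trip ({json_share:.0f}%)")
```

So even for this fairly large payload, JSON encoding and decoding account for only about an eighth of the round-trip time; the rest is networking and framework overhead.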

Caching Mechanisms

aiohttp doesn’t provide built-in caching, but you can use third-party libraries like aiocache:

import time
from aiohttp import web
from aiocache import cached, Cache
from aiocache.serializers import JsonSerializer

@cached(ttl=30, cache=Cache.MEMORY, serializer=JsonSerializer())
async def get_time_data():
  return {"time": time.time()}
async def handle(request):
  data = await get_time_data()
  return web.json_response(data)
app = web.Application()
app.router.add_get('/', handle)
web.run_app(app)

Output:

{
    "time": 1725551465.7373197
}

This aiohttp code uses aiocache to cache the response for 30 seconds.

The cached data is stored in memory and serialized as JSON.
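
Under the hood, a TTL cache like this boils down to storing each result with a timestamp and recomputing once the entry is older than the TTL. Here is a minimal pure-Python sketch of that idea (this is an illustration, not aiocache's actual implementation; the `ttl_cached` name is made up, and the TTL is shortened so the expiry is quick to observe):

```python
import asyncio
import functools
import time

def ttl_cached(ttl):
  """Cache a zero-argument coroutine's result for `ttl` seconds."""
  def decorator(func):
    entry = {}  # holds 'value' and 'timestamp' once populated
    @functools.wraps(func)
    async def wrapper():
      now = time.monotonic()
      if not entry or now - entry['timestamp'] > ttl:
        entry['value'] = await func()
        entry['timestamp'] = now
      return entry['value']
    return wrapper
  return decorator

@ttl_cached(ttl=0.05)
async def get_time_data():
  return {"time": time.time()}

async def demo():
  first = await get_time_data()
  second = await get_time_data()  # within the TTL: cached value returned
  await asyncio.sleep(0.06)
  third = await get_time_data()   # TTL expired: value recomputed
  return first, second, third

first, second, third = asyncio.run(demo())
print(first == second, first == third)  # prints: True False
```

aiocache adds the pieces this sketch leaves out: per-argument cache keys, pluggable backends such as Redis, and serializers like the JsonSerializer used above.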

Tornado doesn't ship a caching layer either, but a simple dictionary-based cache is easy to implement directly in a handler:

import tornado.ioloop
import tornado.web
import time
class MainHandler(tornado.web.RequestHandler):
    cache = {}
    def get(self):
        if 'time' not in self.cache or time.time() - self.cache['time']['timestamp'] > 30:
            self.cache['time'] = {
                'value': time.time(),
                'timestamp': time.time()
            }
        self.write(str(self.cache['time']['value']))
app = tornado.web.Application([
    (r"/", MainHandler),
])
if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

Output:

1725551571.2550669

This Tornado code implements a simple in-memory cache that stores the time for 30 seconds before refreshing.
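
The expiry check in that handler is easy to get subtly wrong, so it can help to pull it out into a small helper that takes the current time as a parameter, which also makes it unit-testable without waiting out the TTL. A sketch (`get_or_refresh` is a hypothetical name, not a Tornado API):

```python
def get_or_refresh(cache, key, ttl, compute, now):
    """Return cache[key]'s value, calling compute() if it's missing or older than ttl."""
    entry = cache.get(key)
    if entry is None or now - entry['timestamp'] > ttl:
        cache[key] = {'value': compute(), 'timestamp': now}
    return cache[key]['value']

cache = {}
v1 = get_or_refresh(cache, 'time', 30, lambda: 'first', now=0)    # miss: computed
v2 = get_or_refresh(cache, 'time', 30, lambda: 'second', now=20)  # still fresh: cached
v3 = get_or_refresh(cache, 'time', 30, lambda: 'third', now=31)   # expired: recomputed
print(v1, v2, v3)  # prints: first first third
```

In the handler, this would be called as `get_or_refresh(self.cache, 'time', 30, time.time, now=time.time())`.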
