Performance benchmarks

Up to 7× faster than FastAPI.
Apple M3 Pro  ·  Python 3.14t free-threaded  ·  wrk 4T/100C/10s
Metric               TurboAPI    FastAPI
Requests / sec       47,832      6,847
Avg latency          2.09 ms     14.6 ms
Cold start           5 ms        800 ms
Memory under load    12 MB       72 MB
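The headline multipliers follow directly from the numbers above. A quick sketch of the arithmetic (values copied from the measurements, not re-run here):

```python
# Measured values from the table above (TurboAPI vs FastAPI).
turbo = {"rps": 47_832, "latency_ms": 2.09, "cold_start_ms": 5, "memory_mb": 12}
fastapi = {"rps": 6_847, "latency_ms": 14.6, "cold_start_ms": 800, "memory_mb": 72}

# Throughput: higher is better, so divide TurboAPI by FastAPI.
rps_speedup = turbo["rps"] / fastapi["rps"]

# Latency, cold start, memory: lower is better, so divide the other way.
latency_ratio = fastapi["latency_ms"] / turbo["latency_ms"]
cold_start_ratio = fastapi["cold_start_ms"] / turbo["cold_start_ms"]
memory_ratio = fastapi["memory_mb"] / turbo["memory_mb"]

print(f"{rps_speedup:.1f}x throughput, {latency_ratio:.1f}x latency, "
      f"{cold_start_ratio:.0f}x cold start, {memory_ratio:.0f}x memory")
# → 7.0x throughput, 7.0x latency, 160x cold start, 6x memory
```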
Head to head: requests per second

Framework    Requests / sec
TurboAPI     47,832
Starlette    9,201
FastAPI      6,847
Flask        4,312
All metrics

Full benchmark breakdown (charts): requests / second (higher is better), avg latency in ms, cold start in ms, and memory under load in MB (lower is better for all three).

Methodology

wrk · 4 threads · 100 connections · 10 s duration.
Each server exposes a single GET / endpoint returning JSON. TurboAPI runs on Python 3.14t (free-threaded);
FastAPI, Starlette, and Flask run on uvicorn with CPython 3.12.
Numbers were measured locally; your hardware will differ.

View source on GitHub →