Request batching is a Node.js pattern that can be used to optimise how a web server handles identical concurrent requests.
It allows the server to process a request only once on behalf of all the clients concurrently asking for the same information. This saves expensive round trips to backend services and prevents those services from being overloaded with many identical concurrent requests. In a way, it's a more specialised version of micro-caching.
This can lead to significant performance improvements when many concurrent users request the same page.
The implementation here uses promises to track pending requests. If a request arrives while an identical one is already pending, the new request awaits the existing promise instead of triggering another call to the backend services.
Once the promise resolves, all the clients awaiting a response receive it.
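The core idea can be sketched roughly like this, assuming a map keyed by the request identity (the names `batchRequest` and `fetchHotels` are illustrative, not from the project's code):

```javascript
// Map of request key -> pending promise for identical in-flight requests
const pendingRequests = new Map()

function batchRequest (key, fetchFn) {
  // If an identical request is already in flight, reuse its promise
  if (pendingRequests.has(key)) {
    return pendingRequests.get(key)
  }
  // Otherwise start the request and track it until it settles
  const promise = fetchFn(key).finally(() => pendingRequests.delete(key))
  pendingRequests.set(key, promise)
  return promise
}

// Hypothetical backend call, instrumented to count round trips
let backendCalls = 0
async function fetchHotels (city) {
  backendCalls++
  return [`hotel data for ${city}`]
}

// Two concurrent identical requests share a single backend round trip
const [a, b] = await Promise.all([
  batchRequest('rome', fetchHotels),
  batchRequest('rome', fetchHotels)
])
console.log(backendCalls) // prints 1: both callers awaited the same promise
```

Note the `finally` cleanup: once the promise settles, the entry is removed from the map so later requests hit the backend again (unlike a cache, batching only deduplicates requests that overlap in time).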
To run the benchmarks you can use autocannon:

```sh
npx autocannon -c 20 -d 20 --on-port /api/hotels/rome -- node server.mjs
```
To compare the results with a version that does not use request batching, comment out line 19 of `server.mjs` and re-run the command above.
ℹ️ Note: this is an artificial benchmark and results might vary significantly in real-life scenarios. Always run your own benchmarks before deciding whether this optimisation can have a positive effect for you.