const cluster = require('cluster');
const numCPUs = require('os').cpus().length;
const fastify = require('fastify')();

const port = 8123;

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  // Fork one worker per CPU core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', worker => {
    console.log(`Worker ${worker.process.pid} died`);
  });
} else {
  fastify.get('*', (req, res) => {
    res.send('Hello World');
  });

  fastify.listen(port, () => {
    console.log(`Fastify "Hello World" listening on port ${port}, PID: ${process.pid}`);
  });
}
Not exactly a benchmark, but an idea from a project I'm starting:
Single threaded (vanilla fastify): 13K requests/second
In cluster mode with 8 threads: 53K requests/second
The server is a graphql instance (based on mercurius/fastify) with no I/O like database access or any significant computations. So I'm pretty sure that these numbers converge a lot as soon as the server becomes more I/O bound.
Hi @thearegee,
this is app.js. How do I implement clustering for the same? Any pointers?
First, I'm sorry it took me so long to reply to this - I didn't even know anyone had commented here!
@avidcoder123 and @johannesfritsch if interested here is a blog I wrote sometime ago that uses this:
https://medium.com/ynap-tech/microservices-solving-a-problem-like-routing-part-2-e197cdd1863c
This includes all my benchmarks - and again, really sorry for the slow reply.
Or if you want to read the completed article:
https://medium.com/ynap-tech/microservices-solving-a-problem-like-routing-2020-update-e623adcc3fc1
@matt212 to be honest, I wouldn't really recommend doing what I did above for anything other than benchmarking - it was a very crude way to cluster the server instance, and if a process died it wouldn't be handled gracefully. If you really want to run multiple clustered processes on one instance for production-like traffic, look at something like:
https://pm2.keymetrics.io/
Or if it was me, keep node single threaded and run multiple lightweight pods in K8s.
If this doesn't answer your questions and you just want some code help let me know.
@thearegee
thanks for the in-depth explanation about clustering.
Do you have any walkthroughs for keeping node single threaded and running multiple lightweight pods in K8s?
Thanks in advance, cheers!
For the time being I will opt for pm2 clustering as suggested.
@matt212 so I don't know much about your setup and your needs, but from a DevOps / SRE perspective I don't recommend clustering node in production. You might get improved throughput and potentially cost savings, but that will be outweighed by the complexity of operating it.
So what I was suggesting was creating lightweight docker image for your service:
https://itnext.io/lightweight-and-performance-dockerfile-for-node-js-ec9eed3c5aef
However, don't run node via npm inside Docker - use node directly to save RAM:
https://medium.com/trendyol-tech/how-we-reduce-node-docker-image-size-in-3-steps-ff2762b51d5a
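Putting both articles' advice together, a lightweight image might look something like the sketch below. This is an assumption based on the thread (the `app.js` entry point and port 8123 come from the gist above; the base image and npm flags are illustrative), not a Dockerfile from the original posts:

```dockerfile
# A sketch of a lightweight Node.js image (illustrative, not from the thread)
FROM node:alpine

WORKDIR /app

# Install only production dependencies
COPY package*.json ./
RUN npm ci --only=production

COPY . .

EXPOSE 8123

# Run node directly instead of "npm start": npm adds an extra wrapper
# process (more RAM) and doesn't forward signals like SIGTERM to node.
CMD ["node", "app.js"]
```

Using the exec form of `CMD` with `node` as PID 1 is what lets Kubernetes or Docker stop the container cleanly.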
So you're using as little resource as possible. These images can then be run anywhere, but I was also suggesting Kubernetes as a way to easily deploy and scale your services - I'm not sure about your infrastructure though.
Here is an article I wrote about how and why we used K8s at my old place:
https://medium.com/ynap-tech/beyond-gitops-how-we-release-our-microservices-on-kubernetes-at-ynap-683617cfd3cc
Hope this helps
@thearegee
Thanks again for the detailed explanation.
To answer your question: I currently don't have any infra set up. I was researching what is best to deploy, maintain and scale a normal B2B Node.js/Postgres platform, and this gist came up :)
Here is my Node.js/Postgres platform:
https://github.com/matt212/Nodejs_Postgresql_VanillaJS_Fastify
I will look into Docker and K8s as you suggested earlier - thanks for that.
That being said, any suggestions or pointers for scaling the boilerplate platform are welcome :)
Interesting code - is there a benchmark for this server in comparison to vanilla fastify/express?