@membphis
Last active August 5, 2024 11:20
Apache APISIX benchmark script: https://github.com/iresty/apisix/blob/master/benchmark/run.sh
Kong benchmark script:
# create a service pointing at the local upstream
curl -i -X POST \
--url http://localhost:8001/services/ \
--data 'name=example-service' \
--data 'host=127.0.0.1'

# add a route to that service
curl -i -X POST \
--url http://localhost:8001/services/example-service/routes \
--data 'paths[]=/hello'

# enable the rate-limiting plugin on the route (route id taken from the response above)
curl -i -X POST http://localhost:8001/routes/efd9d857-39bf-4154-85ec-edb7c1f53856/plugins \
--data "name=rate-limiting" \
--data "config.hour=999999999999" \
--data "config.policy=local"

# enable the prometheus plugin on the same route
curl -i -X POST http://localhost:8001/routes/efd9d857-39bf-4154-85ec-edb7c1f53856/plugins \
--data "name=prometheus"

# sanity check, then benchmark
curl -i http://127.0.0.1:8000/hello/hello
wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
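To spot-check runs without eyeballing the full wrk summary, a tiny helper can pull out the Requests/sec figure. This is a minimal sketch of my own (the `extract_rps` name and the captured sample are mine; the numbers are taken from a Kong run later in this gist):

```shell
# extract_rps: print the Requests/sec value from wrk output on stdin
extract_rps() {
  awk '/^Requests\/sec:/ {print $2}'
}

# Sample wrk summary (figures from a Kong run in this gist)
sample='Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
Requests/sec:  14106.70
Transfer/sec:     57.30MB'

printf '%s\n' "$sample" | extract_rps   # prints 14106.70
```

In a real run you would pipe wrk directly: `wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello | extract_rps`.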
@membphis
Author

Apache APISIX 1 worker

1 upstream + 0 plugin

+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   532.22us   71.04us   2.29ms   88.84%
    Req/Sec    15.05k   642.34    15.66k    88.24%
  152845 requests in 5.10s, 608.27MB read
Requests/sec:  29968.79
Transfer/sec:    119.26MB

+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   524.75us   80.04us   2.52ms   88.82%
    Req/Sec    15.28k     1.74k   31.56k    98.02%
  153495 requests in 5.10s, 610.85MB read
Requests/sec:  30097.07
Transfer/sec:    119.78MB

1 upstream + 2 plugins (limit-count + prometheus)

+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"plugins":{"limit-count":{"time_window":60,"rejected_code":503,"count":2000000000000,"key":"remote_addr"},"prometheus":{}},"uri":"\/hello","upstream":{"nodes":{"127.0.0.1:80":1},"type":"roundrobin"}},"createdIndex":8,"key":"\/apisix\/routes\/1","modifiedIndex":8},"prevNode":{"value":"{\"plugins\":{},\"uri\":\"\\\/hello\",\"upstream\":{\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"}}","createdIndex":7,"key":"\/apisix\/routes\/1","modifiedIndex":7},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   635.84us  100.79us   5.77ms   91.94%
    Req/Sec    12.63k   479.11    13.17k    77.45%
  128182 requests in 5.10s, 518.92MB read
Requests/sec:  25134.05
Transfer/sec:    101.75MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   627.25us   81.61us   2.44ms   91.29%
    Req/Sec    12.79k   162.97    13.17k    64.00%
  127248 requests in 5.00s, 515.14MB read
Requests/sec:  25448.75
Transfer/sec:    103.02MB
+ sleep 1
+ make stop
/usr/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 1 worker'

fake empty apisix server: 1 worker

+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 1/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /tmp/apisix/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   452.33us   44.81us   1.71ms   89.29%
    Req/Sec    17.70k   263.40    18.22k    79.41%
  179701 requests in 5.10s, 715.14MB read
Requests/sec:  35235.82
Transfer/sec:    140.23MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   451.06us   60.64us   2.35ms   88.36%
    Req/Sec    17.77k     1.94k   35.85k    97.03%
  178465 requests in 5.10s, 710.23MB read
Requests/sec:  34999.03
Transfer/sec:    139.28MB
+ sudo openresty -p /tmp/apisix/benchmark/fake-apisix -s stop
+ sudo openresty -p /tmp/apisix/benchmark/server -s stop

Apache APISIX: 4 workers

root@iZ8vbexngtid3sf5c0m1wrZ:/tmp/apisix# ./benchmark/run.sh 4
+ '[' -n 4 ']'
+ worker_cnt=4
+ mkdir -p benchmark/server/logs
+ mkdir -p benchmark/fake-apisix/logs
+ sudo openresty -p /tmp/apisix/benchmark/server
+ trap onCtrlC INT
+ sed -i 's/worker_processes [0-9]*/worker_processes 4/g' conf/nginx.conf
+ make run
mkdir -p logs
mkdir -p /tmp/apisix_cores/
/usr/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf
+ sleep 3
+ echo -e '\n\napisix: 4 worker + 1 upstream + 0 plugin'

Apache APISIX: 4 worker + 1 upstream + 0 plugin

+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"plugins":{},"uri":"\/hello","upstream":{"nodes":{"127.0.0.1:80":1},"type":"roundrobin"}},"createdIndex":10,"key":"\/apisix\/routes\/1","modifiedIndex":10},"prevNode":{"value":"{\"plugins\":{\"limit-count\":{\"time_window\":60,\"rejected_code\":503,\"count\":2000000000000,\"key\":\"remote_addr\"},\"prometheus\":{}},\"uri\":\"\\\/hello\",\"upstream\":{\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"}}","createdIndex":8,"key":"\/apisix\/routes\/1","modifiedIndex":8},"action":"set"}
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   175.63us   64.20us   3.42ms   89.98%
    Req/Sec    44.93k     1.49k   47.31k    64.71%
  456027 requests in 5.10s, 1.77GB read
Requests/sec:  89428.09
Transfer/sec:    355.89MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   176.41us   67.29us   3.42ms   89.80%
    Req/Sec    44.85k     3.47k   72.34k    98.02%
  450608 requests in 5.10s, 1.75GB read
Requests/sec:  88369.52
Transfer/sec:    351.68MB
+ sleep 1
+ echo -e '\n\napisix: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)'


Apache APISIX: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)

+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"plugins":{"limit-count":{"time_window":60,"rejected_code":503,"count":2000000000000,"key":"remote_addr"},"prometheus":{}},"uri":"\/hello","upstream":{"nodes":{"127.0.0.1:80":1},"type":"roundrobin"}},"createdIndex":11,"key":"\/apisix\/routes\/1","modifiedIndex":11},"prevNode":{"value":"{\"plugins\":{},\"uri\":\"\\\/hello\",\"upstream\":{\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"}}","createdIndex":10,"key":"\/apisix\/routes\/1","modifiedIndex":10},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   219.48us  101.73us   4.40ms   94.74%
    Req/Sec    36.57k     1.45k   38.90k    87.25%
  371112 requests in 5.10s, 1.47GB read
Requests/sec:  72767.44
Transfer/sec:    294.58MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   215.35us  102.80us   3.94ms   94.74%
    Req/Sec    37.22k     1.55k   40.25k    79.00%
  370349 requests in 5.00s, 1.46GB read
Requests/sec:  74067.79
Transfer/sec:    299.85MB
+ sleep 1
+ make stop
/usr/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 4 worker'

fake empty Apache APISIX: 4 worker

+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 4/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /tmp/apisix/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   157.65us   90.72us   4.31ms   98.12%
    Req/Sec    50.47k     1.55k   52.84k    83.33%
  512302 requests in 5.10s, 1.99GB read
Requests/sec: 100460.91
Transfer/sec:    399.80MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   155.41us   86.34us   4.21ms   97.99%
    Req/Sec    50.85k     1.23k   52.55k    87.00%
  505867 requests in 5.00s, 1.97GB read
Requests/sec: 101170.34
Transfer/sec:    402.62MB
+ sudo openresty -p /tmp/apisix/benchmark/fake-apisix -s stop
+ sudo openresty -p /tmp/apisix/benchmark/server -s stop

Kong: 1 worker

1 upstream + 0 plugin

 wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.17ms  480.23us   6.10ms   91.30%
    Req/Sec     7.09k     1.05k    8.25k    70.59%
  71946 requests in 5.10s, 292.22MB read
Requests/sec:  14106.70
Transfer/sec:     57.30MB
wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.17ms  485.92us   5.49ms   90.77%
    Req/Sec     7.08k     1.05k    8.35k    68.63%
  71844 requests in 5.10s, 291.81MB read
Requests/sec:  14088.04
Transfer/sec:     57.22MB

1 upstream + 2 plugins (rate-limiting + prometheus)

wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.31ms    1.73ms  21.80ms   92.89%
    Req/Sec     1.28k   164.75     1.45k    74.00%
  12736 requests in 5.00s, 52.70MB read
Requests/sec:   2546.29
Transfer/sec:     10.54MB

wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.24ms    1.76ms  22.99ms   92.76%
    Req/Sec     1.30k   165.74     1.45k    78.00%
  12923 requests in 5.00s, 53.48MB read
Requests/sec:   2582.91
Transfer/sec:     10.69MB

Kong: 4 workers

1 upstream + 0 plugin

wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   376.60us  186.78us   5.15ms   90.59%
    Req/Sec    21.74k     2.19k   25.45k    61.00%
  216135 requests in 5.00s, 0.86GB read
Requests/sec:  43224.72
Transfer/sec:    175.56MB
wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   366.94us  184.11us   9.25ms   91.89%
    Req/Sec    22.03k     1.77k   25.55k    65.69%
  223652 requests in 5.10s, 0.89GB read
Requests/sec:  43858.30
Transfer/sec:    178.14MB

1 upstream + 2 plugins (rate-limiting + prometheus)

$ wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.19ms    1.16ms  11.57ms   72.12%
    Req/Sec     3.73k   675.06     7.63k    77.23%
  37496 requests in 5.10s, 155.16MB read
Requests/sec:   7353.02
Transfer/sec:     30.43MB

wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.13ms    1.18ms  10.73ms   71.28%
    Req/Sec     3.87k   633.77     6.48k    75.25%
  38849 requests in 5.10s, 160.76MB read
Requests/sec:   7617.61
Transfer/sec:     31.52MB


cboitel commented Jun 23, 2020

Any update with the latest Kong 2.x? I'd like to know whether it improved performance over the 1.x branch.

@membphis
Author

Everyone is welcome to run the benchmark script again to collect the latest data.


cboitel commented Jun 23, 2020

Tests performed on a 4-core machine with 8 GB RAM on CentOS 7

APISIX: 1 worker

$ benchmark/run.sh 1
+ '[' -n 1 ']'
+ worker_cnt=1
+ mkdir -p benchmark/server/logs
+ mkdir -p benchmark/fake-apisix/logs
+ sudo openresty -p /home/centos/incubator-apisix/benchmark/server
+ trap onCtrlC INT
++ uname
+ [[ Linux == \D\a\r\w\i\n ]]
+ sed -i 's/worker_processes .*/worker_processes 1;/g' conf/nginx.conf
+ make run
mkdir -p logs
mkdir -p /tmp/apisix_cores/
/usr/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf
+ sleep 3
+ echo -e '\n\napisix: 1 worker + 1 upstream + no plugin'

apisix: 1 worker + 1 upstream + no plugin

+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":14,"key":"\/apisix\/routes\/1","modifiedIndex":14},"action":"set"}
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.96ms  153.58us   5.47ms   87.57%
    Req/Sec     8.33k     0.93k   16.53k    94.06%
  83681 requests in 5.10s, 334.22MB read
Requests/sec:  16407.59
Transfer/sec:     65.53MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.88ms   99.07us   2.74ms   82.28%
    Req/Sec     9.16k   513.13    10.16k    61.76%
  93010 requests in 5.10s, 371.48MB read
Requests/sec:  18235.24
Transfer/sec:     72.83MB
+ sleep 1

apisix: 1 worker + 1 upstream + 2 plugins (limit-count + prometheus)

+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{"limit-count":{"time_window":60,"count":2000000000000,"rejected_code":503,"key":"remote_addr","policy":"local"},"prometheus":{}},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":15,"key":"\/apisix\/routes\/1","modifiedIndex":15},"prevNode":{"value":"{\"priority\":0,\"plugins\":{},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":14,"key":"\/apisix\/routes\/1","modifiedIndex":14},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.01ms  211.85us  12.64ms   96.02%
    Req/Sec     7.97k   178.27     8.27k    65.69%
  80803 requests in 5.10s, 328.27MB read
Requests/sec:  15845.30
Transfer/sec:     64.37MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.97ms   99.65us   3.93ms   89.62%
    Req/Sec     8.25k   255.08     8.74k    61.76%
  83709 requests in 5.10s, 340.08MB read
Requests/sec:  16413.89
Transfer/sec:     66.68MB
+ sleep 1
+ make stop
/usr/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 1 worker'

fake empty apisix server: 1 worker

+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 1/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /home/centos/incubator-apisix/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   731.28us   79.56us   4.61ms   92.03%
    Req/Sec    10.96k   382.68    11.56k    70.59%
  111232 requests in 5.10s, 442.66MB read
Requests/sec:  21811.99
Transfer/sec:     86.80MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   687.87us  113.46us   3.80ms   69.70%
    Req/Sec    11.65k     1.45k   14.00k    50.98%
  118292 requests in 5.10s, 470.77MB read
Requests/sec:  23195.88
Transfer/sec:     92.31MB
+ sudo openresty -p /home/centos/incubator-apisix/benchmark/fake-apisix -s stop
+ sudo openresty -p /home/centos/incubator-apisix/benchmark/server -s stop

APISIX: 2 workers

$ benchmark/run.sh 2
+ '[' -n 2 ']'
+ worker_cnt=2
+ mkdir -p benchmark/server/logs
+ mkdir -p benchmark/fake-apisix/logs
+ sudo openresty -p /home/centos/incubator-apisix/benchmark/server
+ trap onCtrlC INT
++ uname
+ [[ Linux == \D\a\r\w\i\n ]]
+ sed -i 's/worker_processes .*/worker_processes 2;/g' conf/nginx.conf
+ make run
mkdir -p logs
mkdir -p /tmp/apisix_cores/
/usr/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf
+ sleep 3
+ echo -e '\n\napisix: 2 worker + 1 upstream + no plugin'

apisix: 2 worker + 1 upstream + no plugin

+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":14,"key":"\/apisix\/routes\/1","modifiedIndex":14},"action":"set"}
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   459.14us  266.91us   6.31ms   72.42%
    Req/Sec    17.60k     2.19k   23.09k    70.59%
  178578 requests in 5.10s, 713.23MB read
Requests/sec:  35014.32
Transfer/sec:    139.85MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   444.42us  218.20us   9.50ms   66.87%
    Req/Sec    17.98k     1.63k   21.24k    71.00%
  178797 requests in 5.00s, 714.11MB read
Requests/sec:  35757.58
Transfer/sec:    142.81MB
+ sleep 1
+ echo -e '\n\napisix: 2 worker + 1 upstream + 2 plugins (limit-count + prometheus)'

apisix: 2 worker + 1 upstream + 2 plugins (limit-count + prometheus)

+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{"limit-count":{"time_window":60,"count":2000000000000,"rejected_code":503,"key":"remote_addr","policy":"local"},"prometheus":{}},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":15,"key":"\/apisix\/routes\/1","modifiedIndex":15},"prevNode":{"value":"{\"priority\":0,\"plugins\":{},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":14,"key":"\/apisix\/routes\/1","modifiedIndex":14},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   512.45us  302.42us   9.55ms   73.82%
    Req/Sec    15.76k     1.53k   18.74k    66.67%
  159909 requests in 5.10s, 649.65MB read
Requests/sec:  31353.04
Transfer/sec:    127.38MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   535.82us  297.76us   6.05ms   67.20%
    Req/Sec    15.03k     2.30k   18.93k    66.67%
  152555 requests in 5.10s, 619.77MB read
Requests/sec:  29913.36
Transfer/sec:    121.53MB
+ sleep 1
+ make stop
/usr/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 2 worker'

fake empty apisix server: 2 worker

+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 2/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /home/centos/incubator-apisix/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   359.06us  307.97us   9.45ms   97.39%
    Req/Sec    23.00k     2.07k   26.75k    79.41%
  233378 requests in 5.10s, 0.91GB read
Requests/sec:  45765.31
Transfer/sec:    182.13MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   346.99us  251.18us  10.34ms   98.00%
    Req/Sec    23.28k     1.46k   25.37k    73.53%
  236258 requests in 5.10s, 0.92GB read
Requests/sec:  46324.65
Transfer/sec:    184.36MB
+ sudo openresty -p /home/centos/incubator-apisix/benchmark/fake-apisix -s stop
+ sudo openresty -p /home/centos/incubator-apisix/benchmark/server -s stop

APISIX results with proxy_cache disabled

This was just to confirm it has no effect on the results: the cache is declared in nginx but left unused, since the proxy-cache plugin isn't enabled.
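In plain nginx terms, this is the situation being tested (an illustrative sketch only, not APISIX's actual generated config): a declared cache zone reserves resources but stays cold until some location opts in with a proxy_cache directive.

```nginx
http {
    # The zone is declared, so shared memory is reserved for cache keys...
    proxy_cache_path /tmp/cache levels=1:2 keys_zone=demo_cache:10m max_size=1g;

    server {
        listen 9080;
        location / {
            # ...but without a "proxy_cache demo_cache;" directive here,
            # every request bypasses the cache entirely.
            proxy_pass http://127.0.0.1:80;
        }
    }
}
```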


cboitel commented Jun 23, 2020

Kong 2.0.4 on a 4-CPU, 8 GB RAM CentOS 7 machine

1 worker, no plugin

Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.20ms  344.93us   5.48ms   94.46%
    Req/Sec     6.75k     1.07k    8.65k    78.43%
  68521 requests in 5.10s, 278.31MB read
Requests/sec:  13434.81
Transfer/sec:     54.57MB
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.17ms  408.88us   6.31ms   94.21%
    Req/Sec     7.05k     1.20k    8.53k    84.31%
  71515 requests in 5.10s, 290.47MB read
Requests/sec:  14021.86
Transfer/sec:     56.95MB

About 4,000 req/s less than APISIX.

Kong 2 workers

Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   607.65us  376.94us   4.93ms   74.83%
    Req/Sec    13.68k     1.92k   18.16k    68.63%
  138779 requests in 5.10s, 563.68MB read
Requests/sec:  27212.36
Transfer/sec:    110.53MB
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   632.50us  430.31us   7.76ms   81.37%
    Req/Sec    13.37k     2.64k   18.50k    72.55%
  135618 requests in 5.10s, 550.84MB read
Requests/sec:  26593.03
Transfer/sec:    108.01MB

Still about 4,000 req/s behind APISIX.


cboitel commented Jun 23, 2020

Looks like APISIX is showing much lower performance than previously:

  • 1 worker, 0 plugin: used to be near 30,000 req/s, now ~18,000 req/s
  • 1 worker, 2 plugins: used to be near 25,000 req/s, now ~16,000 req/s

At the same time:

  • the fake APISIX server improved from 35,000 req/s to 45,000 req/s (1 worker)
  • Kong remained stable with no plugin

I will publish the Kong 2-plugin setup soon.


cboitel commented Jun 23, 2020

Kong with 2 plugins

1 worker, 2 plugins

Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.80ms    1.51ms  21.36ms   93.28%
    Req/Sec     1.03k   158.26     2.03k    91.09%
  10373 requests in 5.10s, 43.80MB read
Requests/sec:   2033.86
Transfer/sec:      8.59MB
wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello; wrk -d 5 -c 16 http://127.0.0.1:8000/hello/hello
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.10ms    1.74ms  20.82ms   94.07%
    Req/Sec     0.99k   126.75     1.11k    84.00%
  9891 requests in 5.00s, 41.76MB read
Requests/sec:   1977.53
Transfer/sec:      8.35MB
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.76ms    1.81ms  21.04ms   94.48%
    Req/Sec     1.04k   160.35     1.15k    87.00%
  10352 requests in 5.00s, 43.70MB read
Requests/sec:   2068.92
Transfer/sec:      8.73MB

2 workers, 2 plugins

Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.33ms    1.83ms  18.91ms   74.33%
    Req/Sec     1.88k   261.92     2.58k    84.31%
  19104 requests in 5.10s, 80.66MB read
Requests/sec:   3744.91
Transfer/sec:     15.81MB
Running 5s test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.21ms    1.57ms  16.08ms   76.31%
    Req/Sec     1.92k   320.04     2.91k    75.25%
  19332 requests in 5.10s, 81.62MB read
Requests/sec:   3790.61
Transfer/sec:     16.00MB

Again, a huge performance drop in the 2-plugin scenarios: APISIX remains way ahead, but now shows a ~8x factor instead of ~10x.

@membphis
Author

Tests performed on a 4-core machine with 8 GB RAM on CentOS 7

What is the version of APISIX?

@membphis
Author

And I think you could use a table to collect all of the results.
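As one possible sketch of that idea (the log file name and its format are my assumptions), an awk one-liner can fold a set of concatenated wrk runs into a two-column table:

```shell
# Build a sample log from two wrk summaries (numbers taken from this gist)
cat > /tmp/bench.log <<'EOF'
Running 5s test @ http://127.0.0.1:9080/hello
Requests/sec:  29968.79
Running 5s test @ http://127.0.0.1:8000/hello/hello
Requests/sec:  14106.70
EOF

# One row per run: the URL under test and its Requests/sec
awk '/^Running/ {url=$5} /^Requests\/sec:/ {printf "%-40s %10s\n", url, $2}' /tmp/bench.log
```

In practice you would append each `wrk` run's output to the log (`wrk -d 5 -c 16 URL >> /tmp/bench.log`) and run the awk pass at the end.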


cboitel commented Jun 24, 2020

Tests performed on a 4-core machine with 8 GB RAM on CentOS 7

What is the version of APISIX?

The latest git clone from yesterday. I will rerun the tests with the latest official release.


cboitel commented Jun 24, 2020

And I think you could use a table to collect all of the results.

Will do, as I was able to upgrade my test machine to 8 CPU / 16 GB RAM.

@membphis
Author

@cboitel we'd better use the latest official releases (both APISIX and Kong).

We should run them in a cloud environment, e.g. AWS, Google Cloud, or Alibaba Cloud. Then other users can easily confirm whether our benchmark results are correct, which is very important.


cboitel commented Jun 24, 2020

Improved tests for Kong

Global setup

  • Machine: VM with 8 CPU, 16 GB RAM, 100 GB disk
  • OS:
    • CentOS 7 with latest patches installed (sudo yum update)
    • killall command installed (sudo yum install -y psmisc)
    • login with sudo privileges (required since we installed software via RPM)

Kong setup

  • installed from Kong's RPM: see https://docs.konghq.com/install/centos/
  • db-less mode enabled and nginx_worker_processes defined (commented out by default; the default value is auto)
    • simpler to set up
    • no performance issues expected, since an in-memory cache is used
      # save original kong configuration file
      sudo cp -p /etc/kong/kong.conf /etc/kong/kong.conf.original
      
      # enable db-less mode
      cat << __EOF__ | sudo tee -a /etc/kong/kong.conf 
      # enable db-less mode
      database = off
      declarative_config = /etc/kong/kong.yml
      # default work setting
      nginx_worker_processes = auto
      __EOF__
      
  • install script used to run the Kong tests:
    • runs similarly to run.sh from apisix, with a few changes
    • tunes the reference server (provided in incubator-apisix/benchmark/server) to use half of the available CPUs
    • adds performance tests of the reference server to measure its raw performance
    • Kong is restarted between runs (due to db-less mode, to start from a clean setup)
    • runs uptime after each wrk to check the load average over the last minute
    • wrk duration set to 60s (so we can check that the load average never exceeded the number of available CPUs over the last minute)
tee kong-run.sh << __EOS__ && chmod +x kong-run.sh
#! /bin/bash -x 

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

let maxprocs=\$(nproc --all)
let maxworkers=\$maxprocs/2
let worker_cnt=\$maxworkers
[ -n "\$1" ] && worker_cnt=\$1
echo -e "> worker_cnt=\$worker_cnt"
[ \$worker_cnt -gt \$maxworkers ] && echo -e "WARNING: worker count should not exceed \$maxworkers (half of CPUs)"

echo -e "> applying configuration to /etc/kong/kong.conf"
[ ! -e /etc/kong/kong.conf ] && echo -e "FATAL: /etc/kong/kong.conf file missing !" && exit 1
if [[ "\$(uname)" == "Darwin" ]]; then
    sudo sed  -i "" "s/^nginx_worker_processes\s*=.*/nginx_worker_processes=\$worker_cnt/g" /etc/kong/kong.conf || exit 1
else
    sudo sed  -i "s/^nginx_worker_processes\s*=.*/nginx_worker_processes=\$worker_cnt/g" /etc/kong/kong.conf || exit 1
fi

function cleanUp () {
    echo -e "> cleanup any wrk process: "
    pgrep -a wrk && sudo killall wrk
    echo -e "> cleanup any openresty process: " 
    pgrep -a openresty && sudo killall openresty
    echo -e "> cleanup any nginx process: " 
    pgrep -a nginx && sudo killall nginx
}
trap 'cleanUp' INT

function doWrk () {
        wrk -d 60 -c 16 \$1 && uptime
}

echo -e "\n\n#############################################\n> cleanup things just in case"
cleanUp

echo -e "> ensure a clone of incubator-apisix is available"
[ ! -d incubator-apisix ] && { git clone https://github.com/apache/incubator-apisix || exit 1; }

echo -e "\n\n#############################################\n> starting benchmark server" 
echo -e "> adjusting server workers to \$worker_cnt"
if [[ "\$(uname)" == "Darwin" ]]; then
    sudo sed  -i "" "s/^worker_processes[[:space:]].*/worker_processes \$worker_cnt;/g" \$PWD/incubator-apisix/benchmark/server/conf/nginx.conf || exit 1
else
    sudo sed  -i "s/^worker_processes .*/worker_processes \$worker_cnt;/g" \$PWD/incubator-apisix/benchmark/server/conf/nginx.conf || exit 1
fi
mkdir -p incubator-apisix/benchmark/server/logs && sudo /usr/local/openresty/bin/openresty -p \$PWD/incubator-apisix/benchmark/server || exit 1
sleep 3
echo -e "> openresty processes:" && pgrep -a openresty &&
echo -e "> curling server:" && curl -i http://127.0.0.1:80/hello && 
echo -e "> running performance tests:" && doWrk http://127.0.0.1:80/hello

echo -e "\n\n#############################################\nkong: \$worker_cnt worker + 1 upstream + no plugin"

# setup kong configuration
cat << __EOF__ | sudo tee /etc/kong/kong.yml || exit 1
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
__EOF__
# restart kong to ensure clean setup
sudo systemctl stop kong && sleep 2 && sudo systemctl start kong && sleep 2 && 
echo -e "> nginx processes:" && pgrep -a nginx &&
echo -e "> curling service:" && curl -i http://127.0.0.1:8000/hello/hello && 
echo -e "> running performance tests:" && doWrk http://127.0.0.1:8000/hello/hello 

echo -e "\n\n#############################################\nkong: \$worker_cnt worker + 1 upstream + 2 plugins (limit-count + prometheus)"

# setup kong configuration
cat << __EOF__ | sudo tee /etc/kong/kong.yml || exit 1
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
  plugins:
  - name: rate-limiting
    config:
      hour: 999999999999
      policy: local
  - name: prometheus
__EOF__

# restart kong to ensure clean setup
sudo systemctl stop kong && sleep 2 && sudo systemctl start kong && sleep 2 && 
echo -e "> nginx processes:" && pgrep -a nginx &&
echo -e "> curling service:" && curl -i http://127.0.0.1:8000/hello/hello && 
echo -e "> running performance tests:" && doWrk http://127.0.0.1:8000/hello/hello 

echo -e "\n\n#############################################\n> stopping benchmark server & kong"
sudo /usr/local/openresty/bin/openresty -p \$PWD/incubator-apisix/benchmark/server -s stop || exit 1
sudo systemctl stop kong || exit 1
echo -e "> left openresty processes:" && pgrep -a openresty
echo -e "> left wrk processes:" && pgrep -a wrk
__EOS__

Notes:

  • in test runs, I had modified my script to:
    • set the number of workers for the benchmark server to a fixed value (half the number of CPUs) instead of setting it to the same worker count as Kong
    • it didn't change the results

Running kong benchmarks

Running for a number of workers starting at 1 and doubling, never exceeding half the number of CPUs on the machine:

let maxworkers=$(nproc --all)/2; let i=1; while [ $i -le $maxworkers ]; do ./kong-run.sh $i > kong-run-${i}-worker.out; let i=$i*2; done

See attached result files
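To summarize the attached result files, a quick sketch (assuming the `kong-run-*-worker.out` filenames produced by the loop above) pulls out just the throughput lines:

```shell
# Sketch: extract the Requests/sec line from each per-worker result file
# (kong-run-1-worker.out, kong-run-2-worker.out, ...), prefixed by filename.
for f in kong-run-*-worker.out; do
    [ -e "$f" ] || continue          # skip cleanly when no results exist yet
    grep -H '^Requests/sec' "$f"
done
```

Each file contains three `Requests/sec` lines (benchmark server, kong with no plugin, kong with 2 plugins), in that order.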

Summary:

|                  | Workers | Req/s     | Latency Avg | Stdev    | Max     |
|------------------|---------|-----------|-------------|----------|---------|
| benchmark server | 1       | 44192.10  | 363.55us    | 133.65us | 8.16ms  |
| benchmark server | 2       | 86072.74  | 181.45us    | 30.60us  | 1.80ms  |
| benchmark server | 4       | 138865.40 | 85.16us     | 57.52us  | 7.46ms  |
| kong no plugin   | 1       | 14104.31  | 1.16ms      | 446.01us | 16.20ms |
| kong no plugin   | 2       | 25611.06  | 664.92us    | 476.94us | 10.43ms |
| kong no plugin   | 4       | 44990.11  | 395.17us    | 346.85us | 10.87ms |
| kong 2 plugins   | 1       | 1947.98   | 8.23ms      | 1.85ms   | 27.94ms |
| kong 2 plugins   | 2       | 3546.25   | 4.63ms      | 2.74ms   | 35.79ms |
| kong 2 plugins   | 4       | 6153.83   | 2.71ms      | 1.80ms   | 23.27ms |
  1. benchmark server (a simple openresty app):
    • shows excellent performance and scalability
    • as such, it is not a bottleneck in our tests
  2. kong scales similarly but significantly reduces throughput and adds latency:
    • throughput is about 3x lower with no plugins enabled
    • throughput is more than 20x lower with the rate-limiting + prometheus plugins enabled
    • latency and response-time variation also increase by similar factors
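The "3 times" and "20 times" figures can be sanity-checked directly from the 1-worker Requests/sec values in the summary table (a quick awk sketch; the constants are just copied from the table):

```shell
# Throughput ratios, 1-worker case:
# benchmark server 44192.10, kong no plugin 14104.31, kong 2 plugins 1947.98.
awk -v server=44192.10 -v kong=14104.31 -v kong2=1947.98 'BEGIN {
    printf "kong vs server, no plugins: %.1fx lower\n", server / kong
    printf "kong vs server, 2 plugins:  %.1fx lower\n", server / kong2
}'
# → kong vs server, no plugins: 3.1x lower
# → kong vs server, 2 plugins:  22.7x lower
```

The 4-worker numbers give nearly the same ratios (about 3.1x and 22.6x), which is what "similar scalability" means here: the relative overhead is constant across worker counts.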

Detailed output results

kong-run-1-worker.out

> worker_cnt=1
> applying configuration to /etc/kong/kong.conf


#############################################
> cleanup things just in case
> cleanup any wrk process: 
> cleanup any openresty process: 
> cleanup any nginx process: 
> ensure a clone of incubator-apisix is available


#############################################
> starting benchmark server
> adjusting server workers to 4
> openresty processes:
2162 nginx: master process /usr/local/openresty/bin/openresty -p /home/centos/incubator-apisix/benchmark/server
2163 nginx: worker process                                                               
2164 nginx: worker process                                                               
2165 nginx: worker process                                                               
2166 nginx: worker process                                                               
> curling server:
HTTP/1.1 200 OK
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:11:22 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
> running performance tests:
Running 1m test @ http://127.0.0.1:80/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    84.51us   35.08us   2.96ms   69.75%
    Req/Sec    69.76k     4.49k   84.13k    65.72%
  8345719 requests in 1.00m, 32.44GB read
Requests/sec: 138862.33
Transfer/sec:    552.75MB
 15:12:22 up 37 min,  1 user,  load average: 2.97, 0.87, 0.51


#############################################
kong: 1 worker + 1 upstream + no plugin
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
> nginx processes:
2209 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2219 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:12:26 GMT
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.15ms  405.87us  10.98ms   93.89%
    Req/Sec     7.15k     1.19k   14.10k    69.94%
  854936 requests in 1.00m, 3.39GB read
Requests/sec:  14225.26
Transfer/sec:     57.78MB
 15:13:26 up 39 min,  1 user,  load average: 1.96, 0.98, 0.57


#############################################
kong: 1 worker + 1 upstream + 2 plugins (limit-count + prometheus)
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
  plugins:
  - name: rate-limiting
    config:
      hour: 999999999999
      policy: local
  - name: prometheus
> nginx processes:
2267 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2276 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:13:31 GMT
X-RateLimit-Limit-Hour: 999999999999
RateLimit-Limit: 999999999999
X-RateLimit-Remaining-Hour: 999999999998
RateLimit-Remaining: 999999999998
RateLimit-Reset: 2789
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 3
Via: kong/2.0.4

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.05ms    1.81ms  27.75ms   93.76%
    Req/Sec     1.00k   141.86     1.13k    88.08%
  119578 requests in 1.00m, 504.96MB read
Requests/sec:   1992.20
Transfer/sec:      8.41MB
 15:14:31 up 40 min,  2 users,  load average: 1.43, 1.00, 0.61


#############################################
> stopping benchmark server & kong
> left openresty processes:
> left wrk processes:

kong-run-2-worker.out

> worker_cnt=2
> applying configuration to /etc/kong/kong.conf


#############################################
> cleanup things just in case
> cleanup any wrk process: 
> cleanup any openresty process: 
> cleanup any nginx process: 
> ensure a clone of incubator-apisix is available


#############################################
> starting benchmark server
> adjusting server workers to 4
> openresty processes:
2339 nginx: master process /usr/local/openresty/bin/openresty -p /home/centos/incubator-apisix/benchmark/server
2340 nginx: worker process                                                               
2341 nginx: worker process                                                               
2342 nginx: worker process                                                               
2343 nginx: worker process                                                               
> curling server:
HTTP/1.1 200 OK
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:14:34 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
> running performance tests:
Running 1m test @ http://127.0.0.1:80/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    84.86us   49.21us   8.41ms   91.63%
    Req/Sec    69.74k     6.01k   84.83k    65.97%
  8339220 requests in 1.00m, 32.42GB read
Requests/sec: 138755.04
Transfer/sec:    552.33MB
 15:15:34 up 41 min,  2 users,  load average: 3.50, 1.65, 0.85


#############################################
kong: 2 worker + 1 upstream + no plugin
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
> nginx processes:
2386 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2396 nginx: worker process                                                 
2397 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:15:39 GMT
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   648.25us  462.78us   9.90ms   79.43%
    Req/Sec    13.15k     2.61k   20.57k    69.75%
  1569932 requests in 1.00m, 6.23GB read
Requests/sec:  26165.35
Transfer/sec:    106.27MB
 15:16:39 up 42 min,  2 users,  load average: 3.03, 1.84, 0.97


#############################################
kong: 2 worker + 1 upstream + 2 plugins (limit-count + prometheus)
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
  plugins:
  - name: rate-limiting
    config:
      hour: 999999999999
      policy: local
  - name: prometheus
> nginx processes:
2444 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2453 nginx: worker process                                                 
2454 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:16:43 GMT
X-RateLimit-Limit-Hour: 999999999999
RateLimit-Limit: 999999999999
X-RateLimit-Remaining-Hour: 999999999998
RateLimit-Remaining: 999999999998
RateLimit-Reset: 2597
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.75ms    2.85ms  26.57ms   68.83%
    Req/Sec     1.73k   485.52     2.84k    65.00%
  207064 requests in 1.00m, 0.85GB read
Requests/sec:   3450.30
Transfer/sec:     14.57MB
 15:17:43 up 43 min,  2 users,  load average: 2.18, 1.82, 1.02


#############################################
> stopping benchmark server & kong
> left openresty processes:
> left wrk processes:

kong-run-4-worker.out

> worker_cnt=4
> applying configuration to /etc/kong/kong.conf


#############################################
> cleanup things just in case
> cleanup any wrk process: 
> cleanup any openresty process: 
> cleanup any nginx process: 
> ensure a clone of incubator-apisix is available


#############################################
> starting benchmark server
> adjusting server workers to 4
> openresty processes:
2496 nginx: master process /usr/local/openresty/bin/openresty -p /home/centos/incubator-apisix/benchmark/server
2497 nginx: worker process                                                               
2498 nginx: worker process                                                               
2499 nginx: worker process                                                               
2500 nginx: worker process                                                               
> curling server:
HTTP/1.1 200 OK
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:17:46 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
> running performance tests:
Running 1m test @ http://127.0.0.1:80/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    81.63us   34.61us   2.46ms   69.62%
    Req/Sec    72.40k     6.37k   85.67k    64.75%
  8645846 requests in 1.00m, 33.61GB read
Requests/sec: 144095.81
Transfer/sec:    573.59MB
 15:18:46 up 44 min,  2 users,  load average: 4.21, 2.48, 1.30


#############################################
kong: 4 worker + 1 upstream + no plugin
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
> nginx processes:
2542 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2552 nginx: worker process                                                 
2553 nginx: worker process                                                 
2554 nginx: worker process                                                 
2555 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:18:51 GMT
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 0
Via: kong/2.0.4

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   400.60us  358.62us  11.00ms   90.56%
    Req/Sec    22.36k     4.06k   32.62k    69.75%
  2669115 requests in 1.00m, 10.59GB read
Requests/sec:  44483.59
Transfer/sec:    180.68MB
 15:19:51 up 45 min,  2 users,  load average: 5.08, 3.03, 1.57
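Comparing this run against the direct-upstream baseline above (144095.81 rps with 4 server workers) shows the cost of the proxy hop itself. A sketch using only the two `Requests/sec` figures from this transcript:

```shell
# Kong (4 workers, no plugins) vs. the upstream hit directly,
# Requests/sec figures copied from the two runs above.
awk 'BEGIN { upstream = 144095.81; kong = 44483.59;
             printf "kong retains %.1f%% of direct-upstream throughput\n",
                    kong / upstream * 100 }'
```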


#############################################
kong: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)
_format_version: "1.1"

services:
- name: example-service
  host: 127.0.0.1
  routes:
  - paths:
    - /hello
  plugins:
  - name: rate-limiting
    config:
      hour: 999999999999
      policy: local
  - name: prometheus
> nginx processes:
2604 nginx: master process /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2613 nginx: worker process                                                 
2614 nginx: worker process                                                 
2615 nginx: worker process                                                 
2616 nginx: worker process                                                 
> curling service:
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: openresty/1.15.8.3
Date: Wed, 24 Jun 2020 15:19:55 GMT
X-RateLimit-Limit-Hour: 999999999999
RateLimit-Limit: 999999999999
X-RateLimit-Remaining-Hour: 999999999998
RateLimit-Remaining: 999999999998
RateLimit-Reset: 2405
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 2
Via: kong/2.0.4

12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890
> running performance tests:
Running 1m test @ http://127.0.0.1:8000/hello/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.65ms    1.79ms  20.89ms   74.32%
    Req/Sec     3.19k   786.40     5.93k    68.11%
  381803 requests in 1.00m, 1.57GB read
Requests/sec:   6352.66
Transfer/sec:     26.83MB
 15:20:55 up 46 min,  2 users,  load average: 4.41, 3.20, 1.72
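Enabling rate-limiting + prometheus cuts Kong's throughput sharply at both worker counts. Reducing the `Requests/sec` figures from this transcript to percentages (a sketch, arithmetic only):

```shell
# Columns: workers, no-plugin rps, with-plugins rps (from the runs above).
printf '%s\n' '2 26165.35 3450.30' '4 44483.59 6352.66' |
awk '{ printf "%s workers: %.1f%% drop with plugins enabled\n",
              $1, ($2 - $3) / $2 * 100 }'
```

The relative cost of the two plugins is nearly identical at both worker counts, which suggests the overhead is per-request rather than contention-related.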


#############################################
> stopping benchmark server & kong
> left openresty processes:
> left wrk processes:

@cboitel

cboitel commented Jun 24, 2020

Tomorrow I will post similar results for APISIX 1.3.0 installed from RPM (I will adapt my scripts/...).

Hope it helps.

@membphis
Author

Here are my results:

platform: aliyun cloud, 8 vCPU 32 GiB ecs.hfg5.2xlarge
apisix version: apache/apisix@492fa71

# 1 worker

apisix: 1 worker + 1 upstream + no plugin
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":6,"key":"\/apisix\/routes\/1","modifiedIndex":6},"prevNode":{"value":"{\"priority\":0,\"plugins\":{\"limit-count\":{\"time_window\":60,\"count\":2000000000000,\"rejected_code\":503,\"key\":\"remote_addr\",\"policy\":\"local\"},\"prometheus\":{}},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":5,"key":"\/apisix\/routes\/1","modifiedIndex":5},"action":"set"}
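The Admin API replies above come back as single-line JSON; piping them through a pretty-printer makes it much easier to compare route versions. A minimal sketch using Python's stdlib `json.tool` (the JSON here is a shortened, hypothetical response; any JSON formatter would do):

```shell
# Pretty-print a (shortened, hypothetical) Admin API response.
printf '%s' '{"node":{"key":"/apisix/routes/1"},"action":"set"}' | python3 -m json.tool
```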
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   692.12us  117.12us   4.72ms   89.93%
    Req/Sec    11.60k   350.91    12.10k    85.29%
  117717 requests in 5.10s, 470.15MB read
Requests/sec:  23082.99
Transfer/sec:     92.19MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   681.95us   96.32us   2.60ms   89.34%
    Req/Sec    11.76k   138.81    12.07k    73.53%
  119368 requests in 5.10s, 476.75MB read
Requests/sec:  23407.17
Transfer/sec:     93.49MB
+ sleep 1
+ echo -e '\n\napisix: 1 worker + 1 upstream + 2 plugins (limit-count + prometheus)'


apisix: 1 worker + 1 upstream + 2 plugins (limit-count + prometheus)
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{"limit-count":{"time_window":60,"count":2000000000000,"rejected_code":503,"key":"remote_addr","policy":"local"},"prometheus":{}},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":7,"key":"\/apisix\/routes\/1","modifiedIndex":7},"prevNode":{"value":"{\"priority\":0,\"plugins\":{},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":6,"key":"\/apisix\/routes\/1","modifiedIndex":6},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.86ms  162.46us   7.24ms   88.76%
    Req/Sec     9.33k     1.17k   19.40k    93.07%
  93769 requests in 5.10s, 380.95MB read
Requests/sec:  18389.21
Transfer/sec:     74.71MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   845.31us  144.46us   4.37ms   91.35%
    Req/Sec     9.50k   281.09     9.81k    90.20%
  96473 requests in 5.10s, 391.94MB read
Requests/sec:  18916.99
Transfer/sec:     76.85MB
+ sleep 1
+ make stop
/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 1 worker'


fake empty apisix server: 1 worker
+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 1/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   498.02us   57.61us   2.61ms   89.51%
    Req/Sec    16.09k   619.64    16.87k    85.29%
  163367 requests in 5.10s, 650.14MB read
Requests/sec:  32033.32
Transfer/sec:    127.48MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   492.73us   57.29us   2.35ms   88.62%
    Req/Sec    16.26k   385.65    17.01k    87.25%
  165027 requests in 5.10s, 656.75MB read
Requests/sec:  32360.46
Transfer/sec:    128.78MB
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix -s stop
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/server -s stop
# 4 workers

apisix: 4 worker + 1 upstream + no plugin
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":8,"key":"\/apisix\/routes\/1","modifiedIndex":8},"prevNode":{"value":"{\"priority\":0,\"plugins\":{\"limit-count\":{\"time_window\":60,\"count\":2000000000000,\"rejected_code\":503,\"key\":\"remote_addr\",\"policy\":\"local\"},\"prometheus\":{}},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":7,"key":"\/apisix\/routes\/1","modifiedIndex":7},"action":"set"}
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   286.87us  174.65us   3.00ms   72.00%
    Req/Sec    28.68k     2.94k   35.21k    67.65%
  290908 requests in 5.10s, 1.13GB read
Requests/sec:  57042.36
Transfer/sec:    227.82MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   284.10us  177.81us   4.64ms   76.41%
    Req/Sec    28.94k     2.67k   33.68k    67.65%
  293746 requests in 5.10s, 1.15GB read
Requests/sec:  57598.31
Transfer/sec:    230.04MB
+ sleep 1
+ echo -e '\n\napisix: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)'


apisix: 4 worker + 1 upstream + 2 plugins (limit-count + prometheus)
+ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "uri": "/hello",
    "plugins": {
        "limit-count": {
            "count": 2000000000000,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1
        }
    }
}'
{"node":{"value":{"priority":0,"plugins":{"limit-count":{"time_window":60,"count":2000000000000,"rejected_code":503,"key":"remote_addr","policy":"local"},"prometheus":{}},"upstream":{"nodes":{"127.0.0.1:80":1},"hash_on":"vars","type":"roundrobin"},"id":"1","uri":"\/hello"},"createdIndex":9,"key":"\/apisix\/routes\/1","modifiedIndex":9},"prevNode":{"value":"{\"priority\":0,\"plugins\":{},\"upstream\":{\"hash_on\":\"vars\",\"nodes\":{\"127.0.0.1:80\":1},\"type\":\"roundrobin\"},\"id\":\"1\",\"uri\":\"\\\/hello\"}","createdIndex":8,"key":"\/apisix\/routes\/1","modifiedIndex":8},"action":"set"}
+ sleep 3
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   342.74us  220.15us   5.38ms   75.84%
    Req/Sec    24.15k     2.44k   28.39k    72.55%
  245033 requests in 5.10s, 0.97GB read
Requests/sec:  48046.94
Transfer/sec:    195.20MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   352.29us  223.55us   3.37ms   70.11%
    Req/Sec    23.51k     2.47k   28.16k    69.61%
  238538 requests in 5.10s, 0.95GB read
Requests/sec:  46777.18
Transfer/sec:    190.04MB
+ sleep 1
+ make stop
/bin/openresty -p $PWD/ -c $PWD/conf/nginx.conf -s stop
+ echo -e '\n\nfake empty apisix server: 4 worker'


fake empty apisix server: 4 worker
+ sleep 1
+ sed -i 's/worker_processes [0-9]*/worker_processes 4/g' benchmark/fake-apisix/conf/nginx.conf
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   154.62us  104.55us   4.60ms   98.92%
    Req/Sec    51.77k     1.05k   53.74k    71.57%
  525323 requests in 5.10s, 2.04GB read
Requests/sec: 103004.49
Transfer/sec:    409.92MB
+ sleep 1
+ wrk -d 5 -c 16 http://127.0.0.1:9080/hello
Running 5s test @ http://127.0.0.1:9080/hello
  2 threads and 16 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   152.16us   63.87us   3.56ms   93.89%
    Req/Sec    51.84k   795.29    53.94k    69.61%
  525822 requests in 5.10s, 2.04GB read
Requests/sec: 103113.01
Transfer/sec:    410.35MB
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/fake-apisix -s stop
+ sudo openresty -p /home/wangys/incubator-apisix-master/benchmark/server -s stop
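Reducing the transcript above to relative numbers (second run of each pair, `Requests/sec` only) gives a quick view of the plugin overhead and of the gap to the plugin-free fake-apisix proxy. A sketch, arithmetic only, with all figures copied from the runs above:

```shell
# Columns: label, baseline rps, measured rps (from the runs above).
printf '%s\n' \
  'apisix-1w-plugins    23407.17  18916.99' \
  'apisix-4w-plugins    57598.31  46777.18' \
  'apisix-1w-vs-fake    32360.46  23407.17' \
  'apisix-4w-vs-fake   103113.01  57598.31' |
awk '{ printf "%-18s %.1f%% below its baseline\n", $1, ($2 - $3) / $2 * 100 }'
```

Plugin overhead stays roughly constant (about 19%) across worker counts, while the gap between full APISIX and the bare fake-apisix proxy widens as workers are added.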

@membphis
Author

@cboitel I think we can copy these results to an APISIX issue: https://github.com/apache/incubator-apisix/issues

Then I will submit a new PR with the optimizations.

@leeonfu

leeonfu commented Mar 2, 2022

y

@Dhruv-Garg79

@cboitel Have you used any of these gateways since running your benchmarks? What would you recommend for someone looking to adopt one of them?
I am interested in a simple NGINX use case plus auth, with low latency and throughput of up to 3000 qps in the long run.
