You can test disk I/O performance with the following commands:

```shell
cd /mount/point/of/disk/to/test

# Test random read performance
fio --name randread --fsync=1 --direct=1 --rw=randread --blocksize=4k --numjobs=8 --size=512M --time_based --runtime=60 --group_reporting

# Test random write performance
fio --name randwrite --fsync=1 --direct=1 --rw=randwrite --blocksize=4k --numjobs=8 --size=512M --time_based --runtime=60 --group_reporting
```
fio is packaged in most Linux distributions. The two numbers that matter are `iops` and `bw` (bandwidth), which appear on this line of the output:

```
read : io=731600KB, bw=12193KB/s, iops=3048, runt= 60003msec
```
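To grab those two numbers in a script, a regular expression over that summary line works. This is a sketch: the pattern assumes the classic-format summary line shown above (newer fio versions format this line differently).

```python
import re

# Sample summary line from fio's classic text output (see above).
line = "read : io=731600KB, bw=12193KB/s, iops=3048, runt= 60003msec"

# Pattern assumes the "bw=...KB/s, iops=..." layout of the classic format.
m = re.search(r"bw=(\d+)KB/s, iops=(\d+)", line)
bw_kb_s = int(m.group(1))
iops = int(m.group(2))
print(bw_kb_s, iops)  # 12193 3048
```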
From the complete output:

```
randread: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
...
fio-2.14
Starting 8 processes
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
Jobs: 8 (f=8): [r(8)] [100.0% done] [12000KB/0KB/0KB /s] [3000/0/0 iops] [eta 00m:00s]
randread: (groupid=0, jobs=8): err= 0: pid=4211: Mon Jan 16 12:04:27 2023
  read : io=731600KB, bw=12193KB/s, iops=3048, runt= 60003msec
    clat (usec): min=312, max=11614, avg=2623.63, stdev=547.75
     lat (usec): min=312, max=11614, avg=2623.70, stdev=547.75
    clat percentiles (usec):
     |  1.00th=[  556],  5.00th=[ 1720], 10.00th=[ 2064], 20.00th=[ 2384],
     | 30.00th=[ 2512], 40.00th=[ 2608], 50.00th=[ 2672], 60.00th=[ 2704],
     | 70.00th=[ 2768], 80.00th=[ 2896], 90.00th=[ 3184], 95.00th=[ 3440],
     | 99.00th=[ 4016], 99.50th=[ 4320], 99.90th=[ 5280], 99.95th=[ 5792],
     | 99.99th=[ 7712]
    lat (usec) : 500=0.72%, 750=1.05%, 1000=0.46%
    lat (msec) : 2=6.52%, 4=90.18%, 10=1.07%, 20=0.01%
  cpu          : usr=0.11%, sys=0.30%, ctx=182912, majf=0, minf=73
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=182900/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=731600KB, aggrb=12192KB/s, minb=12192KB/s, maxb=12192KB/s, mint=60003msec, maxt=60003msec
```
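The two headline numbers are linked by the block size: bandwidth is roughly IOPS times the block size, so with `--blocksize=4k` the output above is internally consistent. A quick check:

```python
# With a fixed block size, bandwidth ~= iops * block size.
iops = 3048       # from "iops=3048" in the output above
block_kb = 4      # --blocksize=4k
bw_kb_s = 12193   # from "bw=12193KB/s" in the output above

print(iops * block_kb)  # 12192, within rounding of the reported bw
```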
Some reference numbers:

| Storage | READ bw | READ iops | WRITE bw | WRITE iops |
|---|---|---|---|---|
| AWS network storage (EC2 EBS) | 12 MB/s | 3k | 8 MB/s | 2k |
| AWS directly-attached SSD (example from m6id.4xlarge) | 382 MB/s | 95k | 436 MB/s | 109k |
| 2017 Lenovo laptop with local SSDs | 327 MB/s | 79k | 8 MB/s | 2k |
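If you run these tests regularly, fio can also emit machine-readable output via `--output-format=json`, which is more robust than scraping the text summary. The snippet below is a sketch: the field names (`jobs[].read.bw` in KB/s and `jobs[].read.iops`) are assumptions about fio's JSON schema, and the sample string stands in for real fio output.

```python
import json

# Stand-in for real output of:
#   fio --name randread ... --output-format=json
# Field names here ("jobs", "read", "bw", "iops") are assumed from
# fio's JSON schema; check your fio version's actual output.
sample = '{"jobs": [{"jobname": "randread", "read": {"bw": 12193, "iops": 3048.0}}]}'

job = json.loads(sample)["jobs"][0]
print(job["read"]["bw"], job["read"]["iops"])  # 12193 3048.0
```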