@jjrsk:
NFS is incredibly slow in and of itself, so I wouldn't be surprised at all if that was simply the cause. Back when I tested various kinds of network drives, SMB came out as the only one that was actually usable.
Hey, and what numbers do you actually get out of that SMB? (For a comparable run, see the sketch at the end of this post.)
Generally speaking, NFS has its warts, but I've always seen SMB as a
non-native protocol you only reach for once you have Windows clients on the network..
..and so we're not just talking in generalities.. recent hardware, a 10G network, a plain (server) nvme/ssd on the remote end, nfs (4.2) more or less at defaults on Rocky 8 (a mount sketch follows the output):
[root@n2 test_mount]# fio --filename=./testhere --direct=1 --rw=read --bs=1m --size=20G --numjobs=200 --runtime=60 --group_reporting --name=file1
file1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
...
fio-3.19
Starting 200 processes
file1: Laying out IO file (1 file / 20480MiB)
Jobs: 200 (f=200): [R(200)][100.0%][r=1119MiB/s][r=1118 IOPS][eta 00m:00s]
file1: (groupid=0, jobs=200): err= 0: pid=5133: Fri Jun 2 11:58:50 2023
  read: IOPS=1118, BW=1118MiB/s (1173MB/s)(65.7GiB/60179msec)
    clat (msec): min=3, max=186, avg=178.46, stdev= 6.80
     lat (msec): min=3, max=186, avg=178.46, stdev= 6.80
    clat percentiles (msec):
     | 1.00th=[ 176], 5.00th=[ 178], 10.00th=[ 178], 20.00th=[ 178],
     | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 180], 60.00th=[ 180],
     | 70.00th=[ 180], 80.00th=[ 180], 90.00th=[ 182], 95.00th=[ 182],
     | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 184], 99.95th=[ 186],
     | 99.99th=[ 186]
   bw ( MiB/s): min= 781, max= 1282, per=100.00%, avg=1119.66, stdev= 0.81, samples=23824
   iops       : min= 719, max= 1271, avg=1117.37, stdev= 0.82, samples=23824
  lat (msec)  : 4=0.01%, 10=0.02%, 20=0.03%, 50=0.07%, 100=0.11%
  lat (msec)  : 250=99.77%
  cpu         : usr=0.00%, sys=0.05%, ctx=68330, majf=2, minf=55740
  IO depths   : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit   : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=67292,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency  : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=1118MiB/s (1173MB/s), 1118MiB/s-1118MiB/s (1173MB/s-1173MB/s), io=65.7GiB (70.6GB), run=60179-60179msec
[root@n2 test_mount]#
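For context, "more or less at defaults" means nothing beyond an ordinary NFS 4.2 mount, no tuning. A minimal sketch of how that looks on an EL8/Rocky 8 client; the server name, export, and mountpoint here are made up for illustration:

mount -t nfs -o vers=4.2 server:/export/test /mnt/test_mount   # hypothetical server/export/path
nfsstat -m                                                     # shows the options the client actually negotiated (rsize/wsize etc.)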
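And to actually get SMB numbers to put next to this, one could mount the same box over cifs and rerun the identical fio line; the share name, user, and SMB dialect below are assumptions, not something I tested:

mount -t cifs -o username=tester,vers=3.1.1 //server/test /mnt/smb_test   # hypothetical share/credentials
cd /mnt/smb_test
fio --filename=./testhere --direct=1 --rw=read --bs=1m --size=20G --numjobs=200 --runtime=60 --group_reporting --name=file1

Same file size, block size, and job count, so the resulting BW/IOPS lines would be directly comparable to the NFS run above.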