I'm adding here the configuration from that Proxmox forum thread you pasted in your question.
HW: 2x HP DL380p G8, each node with 2x E5-2690 2.9GHz, 96GB RAM, SmartArray P430, 1x Intel S4610 2TB, 1x Kingston DC500M, 10Gbps optical. PVE is installed on dedicated SSDs. Ceph OSDs are RAID0 volumes on the P430 - 1x SSD per BlueStore OSD. So 4x OSDs total on 2 nodes, 2/2 replica.
Now, some Ceph fio tests from a VM (Debian 10 defaults), all VMs on the same host:
1x VM: read 28k, write 10k, readwrite 14k/5k iops <-- that's acceptable
1x VM (nfs server) + 1x VM (nfs client): read 11.1k, write 0.8k, readwrite 1.8k/0.6k iops (from client) <-- this looks very bad
For comparison, DRBD (VMs as raw files):
1x VM: read 45k, write 21k, readwrite 28k/10k iops <-- that's semi-expected
1x VM (nfs server) + 1x VM (nfs client): read 31k, write 4.8k, readwrite 11.6k/3.8k iops (from client) <-- that's acceptable
Well, hopefully I won't write anything dumb now, but: "try it one step at a time" with fio.
My test plan would be roughly the following (so I'd have something to compare against):
What r/w do you get when you run fio directly on the host, against the disk where the VM images live (or through LVM, if that's how you have it)? Exactly like you'd run it on a laptop when testing a new disk.
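Something like this, as a minimal sketch - I'm assuming a 4k random job similar to your tests and the default /var/lib/vz datastore path, so adjust both to your setup:

  # baseline: 4k random read directly on the host datastore
  fio --name=host-baseline --filename=/var/lib/vz/fio.test --size=4G \
      --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
      --runtime=60 --time_based --group_reporting

Run it again with --rw=randwrite and --rw=randrw to get all three numbers like the ones quoted above.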
What r/w do you get when you run the same thing, but in the guests on the root partition? Try it on both.
What r/w do you get when you run fio in the guests, but against the mounted partition (i.e. wherever you have the NFS export / data)?
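For both guest steps you can reuse the same job and just point it at a different directory; a sketch, with /mnt/nfs as my assumed mountpoint:

  # inside the guest: same job against the root fs, then the NFS mount
  fio --name=guest-root --directory=/root --size=4G \
      --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
      --runtime=60 --time_based --group_reporting
  fio --name=guest-nfs --directory=/mnt/nfs --size=4G \
      --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
      --runtime=60 --time_based --group_reporting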
Try creating a 4+ GB file and copy it via rsync from the "guest server" to the "guest client"; try the same via scp so you can see any difference (rsync -e "ssh ..."), and leave out compression for both rsync and ssh.
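Roughly like this ("client" is a placeholder hostname; rsync without -z plus ssh with Compression=no means neither side compresses):

  # make a 4 GB test file that won't compress, then push it both ways
  dd if=/dev/urandom of=/tmp/test4g.bin bs=1M count=4096
  rsync -av -e "ssh -o Compression=no" /tmp/test4g.bin client:/tmp/
  scp -o Compression=no /tmp/test4g.bin client:/tmp/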
Mount the NFS / Ceph share and try copying that 4 GB file one more time. If you see really noticeable differences there, it's clear the culprit is the NFS/Ceph configuration, or sysctl net, dirty pages, ...
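For that step, and for a first look at the dirty-page tunables (server:/export is a placeholder for your actual export):

  # mount the export and repeat the copy onto it
  mount -t nfs server:/export /mnt/nfs
  rsync -av /tmp/test4g.bin /mnt/nfs/
  # compare these between host and guests - they shape async write bursts
  sysctl vm.dirty_ratio vm.dirty_background_ratio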
Why am I writing this? If you install a default apache2 and a default nginx, nginx is several times faster out of the box. So apache is rubbish? Well, it is - but only because by default it uses the worker MPM instead of event. Once you change that, it's comparable to nginx.
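Just to illustrate the point about defaults - on a Debian-style apache2 packaging (my assumption here), the switch is:

  # see which MPM is active, then swap worker for event
  apachectl -V | grep -i mpm
  a2dismod mpm_worker
  a2enmod mpm_event
  systemctl restart apache2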
Another thing that can hit IOPS hard is the KVM/QEMU configuration. Virtio only, and then you can play with caching/write modes in QEMU - it really makes a big difference. For example, I don't like those 4k sectors you have there; they caused me real trouble, though in my case the VMs were Windows. Another thing can be the VM's NIC (again, virtio).
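On Proxmox that's a couple of qm settings per VM; a sketch where VMID 100, the storage name ceph-pool, and the writeback cache mode are all just examples - benchmark cache=writeback vs cache=none on your storage before committing:

  # virtio-scsi controller, cached Ceph-backed disk, virtio NIC
  qm set 100 --scsihw virtio-scsi-pci
  qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback
  qm set 100 --net0 virtio,bridge=vmbr0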
As references:
https://www.slashroot.in/how-do-linux-nfs-performance-tuning-and-optimization
https://www.cyberciti.biz/faq/linux-unix-tuning-nfs-server-client-performance/
https://cromwell-intl.com/open-source/performance-tuning/nfs.html
And one more thing that might give you trouble:
https://cromwell-intl.com/open-source/performance-tuning/disks.html