I know this is a really dumb question, but... I can't picture how much load a site with some number of daily users puts on a server; you just can't simulate or calculate that anywhere.
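Actually, a rough back-of-the-envelope estimate is possible if you guess the CPU cost of one page render. A minimal sketch in shell; both figures here are made-up assumptions for illustration, not measurements:

```shell
# Assumed figures, purely for illustration:
ms_per_request=50   # CPU time one page render costs (assumption)
ncpu=4              # number of cores

# Requests per second the box could sustain at full CPU:
awk -v ms="$ms_per_request" -v n="$ncpu" \
    'BEGIN { printf "%d req/s\n", n * 1000 / ms }'
```

Under those assumptions that comes out to 80 req/s, which over a day is several million hits. For real numbers you would benchmark instead of guessing, e.g. with a load-testing tool like ApacheBench pointed at a staging copy of the site.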
But let's leave the web aside and take just a plain ssh account somewhere. I have an account on devio.us (a free OpenBSD shell account), and glancing at the output:
$ df -h
Filesystem     Size    Used   Avail  Capacity  Mounted on
/dev/wd0a      9.8G    750M    8.6G       8%   /
/dev/wd0d      2.0G    7.3M    1.9G       0%   /tmp
/dev/wd0e     19.7G    7.8G   10.9G      42%   /usr
/dev/wd0f     39.4G   27.8G    9.6G      74%   /var
/dev/wd1a      293G   97.9G    181G      35%   /home
load averages: 2.68, 2.54, 2.57
340 processes: 1 running, 331 idle, 4 zombie, 4 on processor
CPU0 states: 16.5% user, 0.4% nice, 6.1% system, 0.7% interrupt, 76.3% idle
CPU1 states: 30.9% user, 0.5% nice, 7.3% system, 0.0% interrupt, 61.3% idle
CPU2 states: 28.5% user, 0.5% nice, 6.9% system, 0.0% interrupt, 64.1% idle
CPU3 states: 29.4% user, 0.5% nice, 6.7% system, 0.0% interrupt, 63.4% idle
Memory: Real: 524M/842M act/tot Free: 1164M Swap: 0K/2055M used/tot
$ who | wc -l
31
$ ls /home/ | wc -l
5782
$ sysctl -a | egrep -i 'hw.machine|hw.model|hw.ncpu'
hw.machine=i386
hw.model=Intel(R) Xeon(TM) CPU 2.80GHz ("GenuineIntel" 686-class)
hw.ncpu=4
hw.ncpufound=4
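One way to read those numbers together: a 1-minute load average of 2.68 on the 4 CPUs reported by `sysctl hw.ncpu` means the run queue sits at roughly two-thirds of capacity. A quick check, with the figures copied from the output above:

```shell
# Numbers taken from the `top` and `sysctl` output above.
load=2.68   # 1-minute load average
ncpu=4      # hw.ncpu

# Rule of thumb: load/ncpu below 1.0 means the CPUs are not saturated.
awk -v l="$load" -v n="$ncpu" 'BEGIN { printf "%.0f%%\n", 100 * l / n }'
```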
So: 30 users online, roughly 6000 people with accounts there, and out of almost 300 GB only about 100 GB used (!!!). That's pretty comfortable, isn't it? If I put in 5 TB and a better CPU, the hardware simply has to handle it...
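The per-account figures back that up: dividing the 97.9 GB used on /home (from `df`) by the 5782 home directories (from `ls | wc -l`) gives an average of well under 20 MB per account:

```shell
# 97.9 GB used on /home, spread across 5782 home directories:
awk 'BEGIN { printf "%.1f MB per account\n", 97.9 * 1024 / 5782 }'
```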
Websites are a different story, though; there I can't imagine at all how much load they generate on a server. How does regular web hosting work for someone who manages a thousand sites? What specs does that take? I know one server won't handle it, that's obvious, but how many sites get dedicated to a single physical server? Order of magnitude? 10? I have no idea at all.