Fórum Root.cz
Main topics => Software => Topic started by: OJ 04. 10. 2010, 00:45:58
-
Hi,
I need some advice. I bought three WD20EARS drives and would like to put them into a software RAID 5 (mdadm). What confuses me is the 4K sector size. Could someone write up the few commands needed, from formatting the disks through configuring the array?
Thanks
-
The basic command is man mdadm ;)
But otherwise you should be able to put it together using this:
http://wiki.archlinux.org/index.php/Installing_with_Software_RAID_or_LVM
-
I dealt with this recently. There is a jumper on the drive; with it installed, the drive is "XP compatible". Without it (the default), the drive is not XP compatible but has 4K-aligned sectors, which is what you want. On top of that you need a partition table where the first partition does not start at sector 63 — just use the newest fdisk, which aligns the first partition to 1 MiB, i.e. sector 2048. You will end up with a partition table like this:
# fdisk -lu /dev/sda
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb5706ecc
Device Boot Start End Blocks Id System
/dev/sda1 2048 3907029167 1953513560 fd Linux raid autodetect
The rest is standard:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd{a,b,c}1
With ext3/4 it is also worth telling the filesystem about the layer below it:
mkfs.ext3 -E stride=X,stripe-width=Y /dev/md0
where X = C/4K and Y = N*C/4K (C being the chunk size and N the number of data disks). With a 64K chunk on a 2+1 RAID 5 array that gives X=16, Y=32. Correct me if I made a mistake somewhere.
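The stride/stripe-width arithmetic above can be sketched in a few lines (a minimal illustration; the 4K here is the ext filesystem block size):

```python
def ext_raid_params(chunk_kib: int, block_kib: int, data_disks: int) -> tuple[int, int]:
    """stride = chunk size / fs block size; stripe-width = data disks * stride."""
    stride = chunk_kib // block_kib
    return stride, data_disks * stride

# 64 KiB chunk, 4 KiB blocks, RAID 5 over 2 data disks + 1 parity
stride, stripe_width = ext_raid_params(64, 4, 2)
print(stride, stripe_width)  # 16 32
```

For the 4+1 array built later in this thread, the same formula gives stride=16, stripe-width=64.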
-
We handle this by partitioning the disks with GPT (http://en.wikipedia.org/wiki/GUID_Partition_Table). It can align partitions to a sector (rather than a cylinder), so the first partition typically starts at sector 64. By a sector I mean 512 B, since these drives still report 512 B sectors. For example, in parted:
mklabel gpt
unit s
mkpart data 64 -1
...accept the sector number suggested for -1 and ignore the chatter about performance.
I have never seen a Linux that couldn't cope with this. One caveat: if you need to boot from these disks, create a small partition at the start (we use 64 sectors) with the bios_grub flag, so GRUB has somewhere to write its stage1 data.
On the resulting partitions you can then do anything, including RAID (see the other partition flags GPT offers). Out of curiosity, try aligning the partitions differently and see what it does to performance. It is worth it.
-
(quoting the fdisk and mkfs.ext3 example from the post above)
How exactly do I set up that partition table in fdisk, with the Linux raid autodetect type?
Why a 64K chunk size specifically?
And one more thing: if I want to expand the array in the future, will that be a problem?
-
fdisk: press t (change partition type), enter fd, confirm with Enter.
Growing: the MD array itself can be grown just fine, but it takes an extremely long time (for 3x to 4x 2T I'd expect a day or two). Then there is the filesystem — with ext3/4 it can be done online, without unmounting. A while back ext3 needed the maximum size set via -E at mkfs time. If you run mkfs with -n, it prints what it intends to create, including the maximum size (you may not need to change it). The nasty part is that the stride/stripe-width information will be wrong after the array grows, and the filesystem will be terribly slow; the only real fix is to reformat and copy everything back. If you plan to expand the array in the near future, I would either give mkfs the future stride/stripe-width values now and live with the temporary slowness, or better yet invest up front and build the array at its final size.
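A back-of-the-envelope check of the "day or two" estimate above (the ~20 MB/s sustained reshape speed is an assumption; real speed varies with load and tuning):

```python
def reshape_hours(member_bytes: int, speed_mb_s: float) -> float:
    """Hours needed to rewrite one member's worth of data at a sustained speed."""
    return member_bytes / (speed_mb_s * 1024 * 1024) / 3600

# one 2 TB member, assuming ~20 MB/s effective reshape throughput
print(round(reshape_hours(2_000_398_934_016, 20.0), 1))  # roughly a day
```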
-
So I've formatted the disks and set the array building. In the end I bought two more so I wouldn't have to expand the array later.
# fdisk -lu
Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x6a341ba
Device Boot Start End Blocks Id System
/dev/sdc1 64 3907029167 1953514552 fd Linux raid autodetect
Unfortunately I've hit a problem with the disk resync. The speed used to be around 40M/sec. Update: it just jumped back up to 20M, but then immediately started falling again. Any idea what could be causing this, and where the mistake crept in?
Every 2,0s: cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 hdd1[5] hdc1[3] sdd1[2] sdc1[1] sdb1[0]
7814057728 blocks level 5, chunk 64k, algorithm 2 [5/4] [UUUU_]
[>................] recovery = 4.5% (88981760/1953514432) finish=356068.4min speed=90K/sec
unused devices: <none>
-
I see sdb/c/d and hdc/d in there — are those two controllers? That speed is the resync rate; in a 4+1 configuration like yours it means 4 disks are read and 1 is written, so reads total 80 MB/s plus 20 MB/s of writes. That is fairly low, below average; I would start by finding out why some disks show up as hdXX rather than sdXX.
As for the 90K/s: normally the running system has priority and the rebuild is treated as background work. Check whether you still have the default values here:
cat /proc/sys/dev/raid/speed_limit_max
200000
cat /proc/sys/dev/raid/speed_limit_min
1000
and if so, raise the min/max like this:
echo 50000 >/proc/sys/dev/raid/speed_limit_min
to watch the progress I recommend
watch -n 1 cat /proc/mdstat
once the sync finishes, enable write-intent bitmaps (mdadm --grow --bitmap=internal /dev/md0) — they speed up the rebuild after a disk temporarily drops out of the array. More at: http://en.gentoo-wiki.com/wiki/RAID/Software
-
A moment ago it marked /dev/hdd as failed. But when I run a SMART check, everything looks fine.
It does detect two controllers. I'm using an Asus M4A78LT-M LE board, which has 6 SATA ports and 1 IDE. Only in the BIOS did I find out that it splits the SATA ports into groups 1-4 and 5-6.
Is there any problem with using two controllers?
Does anyone know how to set a longer head-parking timeout on these drives?
-
Head parking: apparently it can't be changed. I've just been searching the net for a solution as well:
http://wdc.custhelp.com/cgi-bin/wdc.cfg/php/enduser/std_adp.php?p_faqid=5357
They say it isn't possible. I'm curious how ALZA will handle it when I come back to RMA the drives year after year. The only workaround would probably be to write a daemon that, within the 8-second window before the heads park, seeks to a random position and forces the drive to keep doing something...
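A minimal sketch of that keep-alive daemon idea (the device path and the 7 s interval are placeholder assumptions; the interval just needs to stay under the ~8 s parking timeout):

```python
import os
import random
import time

def poke(path: str, size: int = 512) -> int:
    """Read a few bytes at a random 512 B-aligned offset; returns bytes read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        end = os.lseek(fd, 0, os.SEEK_END)   # works for files and block devices
        sectors = max(end // 512, 1)
        offset = min(random.randrange(sectors) * 512, max(end - size, 0))
        return len(os.pread(fd, size, offset))
    finally:
        os.close(fd)

def keep_alive(path: str, interval: float = 7.0) -> None:
    """Touch the disk more often than the idle-parking timeout, forever."""
    while True:
        poke(path)
        time.sleep(interval)

# keep_alive("/dev/sde")   # hypothetical usage; needs read permission on the device
```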
-
So the two-controller error was down to a badly configured BIOS — I had it set to IDE instead of AHCI.
Then I started completely from scratch and configured everything again. The rebuild speed was around 70 MB/s; at 4% it started dropping and at 4.5% it got stuck.
Does anyone know what to do about it?
Every 1,0s: cat /proc/mdstat Tue Oct 5 21:38:38 2010
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf1[5](S) sde1[6](F) sdd1[2] sdc1[1] sdb1[0]
7814057728 blocks level 5, 64k chunk, algorithm 2 [5/3] [UUU__]
unused devices: <none>
mdadm -D /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Tue Oct 5 21:10:08 2010
Raid Level : raid5
Array Size : 7814057728 (7452.07 GiB 8001.60 GB)
Used Dev Size : 1953514432 (1863.02 GiB 2000.40 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Oct 5 21:34:36 2010
State : clean, degraded
Active Devices : 3
Working Devices : 4
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 1fd64447:dd0a35aa:01f9e43d:ac30fbff (local to host server)
Events : 0.11
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 0 0 3 removed
4 0 0 4 removed
5 8 81 - spare /dev/sdf1
6 8 65 - faulty spare /dev/sde1
smartctl -a /dev/sdf
smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
=== START OF INFORMATION SECTION ===
Device Model: WDC WD20EARS-00S8B1
Serial Number: WD-WCAVY5013312
Firmware Version: 80.00A80
User Capacity: 2 000 398 934 016 bytes
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Tue Oct 5 21:40:08 2010 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (41100) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x3031) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 163 163 021 Pre-fail Always - 8816
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 21
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 11
10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 19
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 15
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 330
194 Temperature_Celsius 0x0022 110 109 000 Old_age Always - 42
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
-
Sorry, I pasted the SMART output of a different disk.
# smartctl -a /dev/sde
smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
=== START OF INFORMATION SECTION ===
Device Model: WDC WD20EARS-00S8B1
Serial Number: WD-WCAVY5044878
Firmware Version: 80.00A80
User Capacity: 2 000 398 934 016 bytes
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Tue Oct 5 21:47:44 2010 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (40260) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x3031) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 167 167 021 Pre-fail Always - 8608
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 21
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 11
10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 19
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 16
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 101
194 Temperature_Celsius 0x0022 108 107 000 Old_age Always - 44
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 9 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
-
Can you tell us exactly which controllers you have (lspci | grep SATA), and what error occurred when the disk was thrown out of the array (most likely in dmesg, or in syslog / the kernel message log)? Also, those disks of yours run quite hot — 44 degrees for a Green drive; my four 2TB EARS sit at 27-28 degrees. Think about active cooling unless you want to lose your data soon.
-
Controller:
# lspci | grep SATA
00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode]
dmesg output from configuring the array up to the error:
[ 142.878744] md: bind<sdb1>
[ 143.339987] md: bind<sdc1>
[ 143.382873] md: bind<sdd1>
[ 143.872654] md: bind<sde1>
[ 144.318229] md: bind<sdf1>
[ 144.341220] async_tx: api initialized (async)
[ 144.408352] raid6: int64x1 2717 MB/s
[ 144.476352] raid6: int64x2 3657 MB/s
[ 144.544351] raid6: int64x4 2889 MB/s
[ 144.612359] raid6: int64x8 2599 MB/s
[ 144.680357] raid6: sse2x1 1437 MB/s
[ 144.748355] raid6: sse2x2 2913 MB/s
[ 144.816342] raid6: sse2x4 6499 MB/s
[ 144.816344] raid6: using algorithm sse2x4 (6499 MB/s)
[ 144.821525] xor: automatically using best checksumming function: generic_sse
[ 144.840350] generic_sse: 3144.000 MB/sec
[ 144.840356] xor: using function: generic_sse (3144.000 MB/sec)
[ 144.846265] md: raid6 personality registered for level 6
[ 144.846269] md: raid5 personality registered for level 5
[ 144.846271] md: raid4 personality registered for level 4
[ 144.846404] md/raid:md0: device sde1 operational as raid disk 3
[ 144.846408] md/raid:md0: device sdd1 operational as raid disk 2
[ 144.846411] md/raid:md0: device sdc1 operational as raid disk 1
[ 144.846414] md/raid:md0: device sdb1 operational as raid disk 0
[ 144.847149] md/raid:md0: allocated 5322kB
[ 144.847382] md/raid:md0: raid level 5 active with 4 out of 5 devices, algorithm 2
[ 144.847386] RAID conf printout:
[ 144.847389] --- level:5 rd:5 wd:4
[ 144.847392] disk 0, o:1, dev:sdb1
[ 144.847395] disk 1, o:1, dev:sdc1
[ 144.847397] disk 2, o:1, dev:sdd1
[ 144.847400] disk 3, o:1, dev:sde1
[ 144.847440] md0: detected capacity change from 0 to 8001595113472
[ 144.847792] md0:
[ 144.849362] RAID conf printout:
[ 144.849365] --- level:5 rd:5 wd:4
[ 144.849367] disk 0, o:1, dev:sdb1
[ 144.849369] disk 1, o:1, dev:sdc1
[ 144.849370] disk 2, o:1, dev:sdd1
[ 144.849372] disk 3, o:1, dev:sde1
[ 144.849373] disk 4, o:1, dev:sdf1
[ 144.849442] md: recovery of RAID array md0
[ 144.849443] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 144.849445] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 144.849450] md: using 128k window, over a total of 1953514432 blocks.
[ 144.849539] unknown partition table
[ 1569.471543] ata5.00: exception Emask 0x0 SAct 0xfd7 SErr 0x0 action 0x0
[ 1569.471552] ata5.00: irq_stat 0x40000008
[ 1569.471562] ata5.00: failed command: READ FPDMA QUEUED
[ 1569.471578] ata5.00: cmd 60/f8:30:48:ae:6e/00:00:0a:00:00/40 tag 6 ncq 126976 in
[ 1569.471581] res 41/40:00:d8:ae:6e/00:00:0a:00:00/40 Emask 0x409 (media error) <F>
[ 1569.471589] ata5.00: status: { DRDY ERR }
[ 1569.471594] ata5.00: error: { UNC }
[ 1569.484442] ata5.00: configured for UDMA/133
[ 1569.484483] ata5: EH complete
[ ... seven more READ FPDMA QUEUED media-error exceptions on ata5.00 trimmed; same pattern at nearby sectors ... ]
[ 1605.446422] ata5.00: exception Emask 0x0 SAct 0x3fff SErr 0x0 action 0x0
[ 1605.446432] ata5.00: irq_stat 0x40000008
[ 1605.446441] ata5.00: failed command: READ FPDMA QUEUED
[ 1605.446458] ata5.00: cmd 60/00:58:40:fa:6e/01:00:0a:00:00/40 tag 11 ncq 131072 in
[ 1605.446461] res 41/40:00:58:fa:6e/00:00:0a:00:00/40 Emask 0x409 (media error) <F>
[ 1605.446469] ata5.00: status: { DRDY ERR }
[ 1605.446474] ata5.00: error: { UNC }
[ 1605.459615] ata5.00: configured for UDMA/133
[ 1605.459685] sd 4:0:0:0: [sde] Unhandled sense code
[ 1605.459692] sd 4:0:0:0: [sde] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1605.459700] sd 4:0:0:0: [sde] Sense Key : Medium Error [current] [descriptor]
[ 1605.459710] Descriptor sense data with sense descriptors (in hex):
[ 1605.459715] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
[ 1605.459733] 0a 6e fa 58
[ 1605.459740] sd 4:0:0:0: [sde] Add. Sense: Unrecovered read error - auto reallocate failed
[ 1605.459751] sd 4:0:0:0: [sde] CDB: Read(10): 28 00 0a 6e fa 40 00 01 00 00
[ 1605.459768] end_request: I/O error, dev sde, sector 175045208
[ 1605.459779] md/raid:md0: read error not correctable (sector 175045144 on sde1).
[ 1605.459788] md/raid:md0: Disk failure on sde1, disabling device.
[ 1605.459792] <1>md/raid:md0: Operation continuing on 3 devices.
[ 1605.459807] md/raid:md0: read error not correctable (sector 175045152 on sde1).
[ 1605.459815] md/raid:md0: read error not correctable (sector 175045160 on sde1).
[ 1605.459823] md/raid:md0: read error not correctable (sector 175045168 on sde1).
[ 1605.459831] md/raid:md0: read error not correctable (sector 175045176 on sde1).
[ 1605.459839] md/raid:md0: read error not correctable (sector 175045184 on sde1).
[ 1605.459846] md/raid:md0: read error not correctable (sector 175045192 on sde1).
[ 1605.459854] md/raid:md0: read error not correctable (sector 175045200 on sde1).
[ 1605.459862] md/raid:md0: read error not correctable (sector 175045208 on sde1).
[ 1605.459870] md/raid:md0: read error not correctable (sector 175045216 on sde1).
[ 1605.459918] ata5: EH complete
[ 1605.884348] md: md0: recovery done.
[ 1609.001619] ata5.00: exception Emask 0x0 SAct 0x18cf SErr 0x0 action 0x0
[ 1609.001628] ata5.00: irq_stat 0x40000008
[ 1609.001638] ata5.00: failed command: READ FPDMA QUEUED
[ 1609.001654] ata5.00: cmd 60/08:58:40:16:6f/00:00:0a:00:00/40 tag 11 ncq 4096 in
[ 1609.001657] res 41/40:00:40:16:6f/00:00:0a:00:00/40 Emask 0x409 (media error) <F>
[ 1609.001665] ata5.00: status: { DRDY ERR }
[ 1609.001670] ata5.00: error: { UNC }
[ 1609.015869] ata5.00: configured for UDMA/133
[ 1609.015905] ata5: EH complete
[ ... three more identical read-error exceptions at the same LBA trimmed ... ]
[ 1620.022984] ata5.00: exception Emask 0x0 SAct 0xff SErr 0x0 action 0x0
[ 1620.022994] ata5.00: irq_stat 0x40000008
[ 1620.023002] ata5.00: failed command: READ FPDMA QUEUED
[ 1620.023019] ata5.00: cmd 60/08:30:40:16:6f/00:00:0a:00:00/40 tag 6 ncq 4096 in
[ 1620.023022] res 41/40:00:40:16:6f/00:00:0a:00:00/40 Emask 0x409 (media error) <F>
[ 1620.023030] ata5.00: status: { DRDY ERR }
[ 1620.023035] ata5.00: error: { UNC }
[ 1620.036039] ata5.00: configured for UDMA/133
[ 1620.036081] sd 4:0:0:0: [sde] Unhandled sense code
[ 1620.036087] sd 4:0:0:0: [sde] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1620.036095] sd 4:0:0:0: [sde] Sense Key : Medium Error [current] [descriptor]
[ 1620.036105] Descriptor sense data with sense descriptors (in hex):
[ 1620.036110] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
[ 1620.036127] 0a 6f 16 40
[ 1620.036134] sd 4:0:0:0: [sde] Add. Sense: Unrecovered read error - auto reallocate failed
[ 1620.036145] sd 4:0:0:0: [sde] CDB: Read(10): 28 00 0a 6f 16 40 00 00 08 00
[ 1620.036161] end_request: I/O error, dev sde, sector 175052352
[ 1620.036172] raid5_end_read_request: 19 callbacks suppressed
[ 1620.036179] md/raid:md0: read error not correctable (sector 175052288 on sde1).
[ 1620.036212] ata5: EH complete
[ 1620.528684] RAID conf printout:
[ 1620.528693] --- level:5 rd:5 wd:3
[ 1620.528700] disk 0, o:1, dev:sdb1
[ 1620.528706] disk 1, o:1, dev:sdc1
[ 1620.528711] disk 2, o:1, dev:sdd1
[ 1620.528715] disk 3, o:0, dev:sde1
[ 1620.528719] disk 4, o:1, dev:sdf1
[ ... three more similar RAID conf printouts trimmed ... ]
I'll sort out the temperature once the array is configured; I want it below 30 °C. Besides, the server is in my room for now and will move to the cellar eventually.
-
Hi, I see it's reporting read errors. If that's true, the disk is damaged. I have a couple of tips for checking whether it isn't the controller:
1) try different SATA cables on a completely different PC (ideally with a different chipset)
2) on your own PC, run this first one disk at a time and then in parallel (or straight in parallel):
dd if=/dev/sdb of=/dev/null bs=1M (man dd explains how to display progress by sending SIGUSR1 with kill)
If it doesn't fail at the same position with a single disk (the sector number is in your log), I would blame the controller/cables. PS: in a discussion on the diit.cz server someone complains that their 2T loses data, but gave no further details :)
-
I've just tried swapping the SATA cables, again without success. The error appeared on /dev/sde again, but at sector 175 066 864, which differs from the original by about 21k sectors. I'll try the copy test this evening, and the test on another motherboard tomorrow evening at the earliest.
-
So the error is most likely caused by the disk. I ran
dd if=/dev/sdX of=/dev/null bs=1M
simultaneously on all the disks, and it failed on /dev/sde (the one that always came up failed during the rebuild).
First run:
dd if=/dev/sde of=/dev/null bs=1M
dd: reading `/dev/sde': Input/output error
5+1 records in
5+1 records out
5767168 bytes (5.8 MB) copied, 90.6987 s, 63.6 kB/s
Second run:
dd if=/dev/sde of=/dev/null bs=1M
dd: reading `/dev/sde': Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.123431 s, 0.0 kB/s
Then I rebooted the PC and Linux started complaining the disk wasn't ready and kept attempting a soft reset. I then set IDE mode in the BIOS — in AHCI mode the BIOS doesn't see SATA disks on ports 5 and 6 (they are visible only from the OS). And since then the disk won't come up even in the BIOS. I tried 3 different cables and ports, nothing.
I'll report back after the RMA :)
-
I'd like to add my two cents, because I've had quite the experience with WD Green drives (EARS and EADS, though only 1.5T) and SW RAID 5. Several inaccurate statements have been made here, and some important information is missing:
0. I recommend avoiding the WD Green line entirely.
1. Head parking can be disabled with the wdidle3.exe utility, found on Western Digital's website. They may say it isn't intended for this drive model, but it works. It has to be run from FreeDOS.
Drives we've been running for less than a year didn't have this disabled and have already parked more than 250 thousand times...
I also have a question here: how can parking happen at all when the filesystem commit interval is 5 s and parking is supposed to kick in only after 8 s of inactivity?
2. The partition table must be aligned so that partitions (not just the first, but any later ones too) start on a sector number divisible by 8 (i.e. on a 4K boundary).
3. The RAID must be created with metadata format 1.0 (the -e option of mdadm), which stores the metadata at the end of the partition. The metadata size is not divisible by 4K, so storing it at the beginning of the partition shifts the start of the usable space and the alignment is lost! Depending on your mdadm version this may or may not be the default!
4. The same goes for LVM: take care that it doesn't break the alignment. I don't use it, so I don't know the specifics, but I would be very careful.
-
2. The partition table must be aligned so that partitions (not just the first, but any later ones too) start on a sector number divisible by 8 (i.e. on a 4K boundary).
So if I understand correctly, should the first sector be placed at position 4096? Or at any position such as 64, 128, ...?
Thanks for the advice. I'd just like to ask whether you managed to disable parking completely with wdidle3.exe. When I used the /D switch, the utility set the parking time to 6300 ms, so I manually set the time to 25 500 ms instead.
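For the alignment question above, a tiny self-check (sector numbers here are the 512 B logical sectors the drive reports, so a 4K boundary means a number divisible by 8):

```python
def aligned_4k(start_sector: int) -> bool:
    """True if a partition starting at this 512 B sector lies on a 4 KiB boundary."""
    return start_sector % 8 == 0

# any multiple of 8 is fine: 64, 128, 2048, ...; the old fdisk default of 63 is not
print([s for s in (63, 64, 128, 2048) if aligned_4k(s)])  # [64, 128, 2048]
```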