hi,

There are some web servers that NFS-mount their document roots from this
RAID 5 array (the kernel log from the failed autostart is below).
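
For context, the web servers mount the array roughly like this (a sketch
only; the export path, mount point and options are placeholders, not the
real config):

    # /etc/fstab on each web server (hypothetical paths)
    raid:/export/www    /var/www    nfs    rw,hard,intr    0 0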

Two hours earlier it was still working :(

The array runs under kernel 2.2.6-ac1.
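
In case it helps, the /etc/raidtab looks roughly like this (reconstructed
from the log below; the chunk-size and the order of sdd1/sde1 in the last
two slots are from memory and may be off):

    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              32    # guess, not sure of the real value
        device                  /dev/sdb1
        raid-disk               0
        device                  /dev/sdc1
        raid-disk               1
        device                  /dev/sde1
        raid-disk               2
        device                  /dev/sdd1
        raid-disk               3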

desperately yours,

markus

Jul 14 19:17:42 raid kernel: (read) sdb1's sb offset: 8883840 [events: 00000005]
Jul 14 19:17:42 raid kernel: (read) sdc1's sb offset: 8883840 [events: 00000005]
Jul 14 19:17:42 raid kernel: (read) sde1's sb offset: 8883840 [events: 00000004]
Jul 14 19:17:42 raid kernel: (read) sdd1's sb offset: 8883840 [events: 00000003]
Jul 14 19:17:42 raid kernel: autorun ...
Jul 14 19:17:42 raid kernel: considering sdd1 ...
Jul 14 19:17:42 raid kernel:   adding sdd1 ...
Jul 14 19:17:42 raid kernel:   adding sde1 ...
Jul 14 19:17:42 raid kernel:   adding sdc1 ...
Jul 14 19:17:42 raid kernel:   adding sdb1 ...
Jul 14 19:17:42 raid kernel: created md0
Jul 14 19:17:42 raid kernel: bind<sdb1,1>
Jul 14 19:17:42 raid kernel: bind<sdc1,2>
Jul 14 19:17:42 raid kernel: bind<sde1,3>
Jul 14 19:17:42 raid kernel: bind<sdd1,4>
Jul 14 19:17:42 raid kernel: running: <sdd1><sde1><sdc1><sdb1>
Jul 14 19:17:42 raid kernel: now!
Jul 14 19:17:42 raid kernel: sdd1's event counter: 00000003
Jul 14 19:17:42 raid kernel: sde1's event counter: 00000004
Jul 14 19:17:42 raid kernel: sdc1's event counter: 00000005
Jul 14 19:17:42 raid kernel: sdb1's event counter: 00000005
Jul 14 19:17:42 raid kernel: md: superblock update time inconsistency -- using the most recent one
Jul 14 19:17:42 raid kernel: freshest: sdc1
Jul 14 19:17:42 raid kernel: md: kicking non-fresh sdd1 from array!
Jul 14 19:17:42 raid kernel: unbind<sdd1,3>
Jul 14 19:17:42 raid kernel: export_rdev(sdd1)
Jul 14 19:17:42 raid kernel: md0: kicking faulty sde1!
Jul 14 19:17:42 raid kernel: unbind<sde1,2>
Jul 14 19:17:42 raid kernel: export_rdev(sde1)
Jul 14 19:17:42 raid kernel: md0: removing former faulty sdd1!
Jul 14 19:17:42 raid kernel: md: md0: raid array is not clean -- starting background reconstruction
Jul 14 19:17:42 raid kernel: md0: max total readahead window set to 768k
Jul 14 19:17:42 raid kernel: md0: 3 data-disks, max readahead per data-disk: 256k
Jul 14 19:17:42 raid kernel: raid5: device sdc1 operational as raid disk 1
Jul 14 19:17:42 raid kernel: raid5: device sdb1 operational as raid disk 0
Jul 14 19:17:42 raid kernel: raid5: not enough operational devices for md0 (2/4 failed)
Jul 14 19:17:42 raid kernel: RAID5 conf printout:
Jul 14 19:17:42 raid kernel:  --- rd:4 wd:2 fd:2
Jul 14 19:17:42 raid kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sdb1
Jul 14 19:17:42 raid kernel:  disk 1, s:0, o:1, n:1 rd:1 us:1 dev:sdc1
Jul 14 19:17:42 raid kernel:  disk 2, s:0, o:0, n:3 rd:2 us:1 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel:  disk 3, s:0, o:0, n:2 rd:3 us:1 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Jul 14 19:17:42 raid kernel: raid5: failed to run raid set md0
Jul 14 19:17:42 raid kernel: pers->run() failed ...
Jul 14 19:17:42 raid kernel: do_md_run() returned -22
Jul 14 19:17:42 raid kernel: unbind<sdc1,1>
Jul 14 19:17:42 raid kernel: export_rdev(sdc1)
Jul 14 19:17:42 raid kernel: unbind<sdb1,0>
Jul 14 19:17:42 raid kernel: export_rdev(sdb1)
Jul 14 19:17:42 raid kernel: md0 stopped.
Jul 14 19:17:42 raid kernel: ... autorun DONE.
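
If I read the conf printout right, the arithmetic is: a 4-disk RAID5 needs
at least nr-raid-disks - 1 = 3 working members, but after sdd1 is kicked as
non-fresh and sde1 as faulty (their event counters, 00000003 and 00000004,
lag behind sdb1/sdc1 at 00000005), only wd:2 remain, so 2 < 3, raid5 refuses
to start, and do_md_run() returns -22 (EINVAL).
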
-- 
Digital Online Media GmbH
(we like to be called DOM)

http://www.dom.de
mailto: [EMAIL PROTECTED]
phone: +49 221 951680
fax: +49 221 951688
