Hello, I was wondering if someone could help me. In a rush, I turned off my machine a few seconds before it had finished shutting down, and now my syslog shows:

===
Mar 9 18:47:07 ingress kernel: (read) sda1's sb offset: 4441856 [events: 00000033]
Mar 9 18:47:07 ingress kernel: (read) sdb1's sb offset: 4441856 [events: 00000032]
Mar 9 18:47:07 ingress kernel: (read) sdc1's sb offset: 4441856 [events: 00000032]
Mar 9 18:47:07 ingress kernel: (read) sdd1's sb offset: 4441856 [events: 00000031]
Mar 9 18:47:07 ingress kernel: autorun ...
Mar 9 18:47:07 ingress kernel: considering sdd1 ...
Mar 9 18:47:07 ingress kernel: adding sdd1 ...
Mar 9 18:47:07 ingress kernel: adding sdc1 ...
Mar 9 18:47:07 ingress kernel: adding sdb1 ...
Mar 9 18:47:07 ingress kernel: adding sda1 ...
Mar 9 18:47:07 ingress kernel: created md0
Mar 9 18:47:07 ingress kernel: bind<sda1,1>
Mar 9 18:47:07 ingress kernel: bind<sdb1,2>
Mar 9 18:47:07 ingress kernel: bind<sdc1,3>
Mar 9 18:47:07 ingress kernel: bind<sdd1,4>
Mar 9 18:47:07 ingress kernel: running: <sdd1><sdc1><sdb1><sda1>
Mar 9 18:47:07 ingress kernel: now!
Mar 9 18:47:07 ingress kernel: sdd1's event counter: 00000031
Mar 9 18:47:07 ingress kernel: sdc1's event counter: 00000032
Mar 9 18:47:07 ingress kernel: sdb1's event counter: 00000032
Mar 9 18:47:07 ingress kernel: sda1's event counter: 00000033
Mar 9 18:47:07 ingress kernel: md: superblock update time inconsistency -- using the most recent one
Mar 9 18:47:07 ingress kernel: freshest: sda1
Mar 9 18:47:07 ingress kernel: md: kicking non-fresh sdd1 from array!
Mar 9 18:47:07 ingress kernel: unbind<sdd1,3>
Mar 9 18:47:07 ingress kernel: export_rdev(sdd1)
Mar 9 18:47:07 ingress kernel: md0: kicking faulty sdc1!
Mar 9 18:47:07 ingress kernel: unbind<sdc1,2>
Mar 9 18:47:08 ingress kernel: export_rdev(sdc1)
Mar 9 18:47:08 ingress kernel: md0: removing former faulty sdd1!
Mar 9 18:47:08 ingress kernel: md: md0: raid array is not clean -- starting background reconstruction
Mar 9 18:47:08 ingress kernel: raid5 personality registered
Mar 9 18:47:08 ingress kernel: md0: max total readahead window set to 6144k
Mar 9 18:47:08 ingress kernel: md0: 3 data-disks, max readahead per data-disk: 2048k
Mar 9 18:47:08 ingress kernel: raid5: device sdb1 operational as raid disk 1
Mar 9 18:47:08 ingress kernel: raid5: device sda1 operational as raid disk 0
Mar 9 18:47:08 ingress kernel: raid5: not enough operational devices for md0 (2/4 failed)
Mar 9 18:47:08 ingress kernel: RAID5 conf printout:
Mar 9 18:47:08 ingress kernel: --- rd:4 wd:2 fd:2
Mar 9 18:47:08 ingress kernel: disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1
Mar 9 18:47:08 ingress kernel: disk 1, s:0, o:1, n:1 rd:1 us:1 dev:sdb1
Mar 9 18:47:08 ingress kernel: disk 2, s:0, o:0, n:2 rd:2 us:1 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: disk 3, s:0, o:0, n:3 rd:3 us:1 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
Mar 9 18:47:08 ingress kernel: raid5: failed to run raid set md0
Mar 9 18:47:08 ingress kernel: pers->run() failed ...
Mar 9 18:47:08 ingress kernel: do_md_run() returned -22
Mar 9 18:47:08 ingress kernel: unbind<sdb1,1>
Mar 9 18:47:08 ingress kernel: export_rdev(sdb1)
Mar 9 18:47:08 ingress kernel: unbind<sda1,0>
Mar 9 18:47:08 ingress kernel: export_rdev(sda1)
Mar 9 18:47:08 ingress kernel: md0 stopped.
Mar 9 18:47:08 ingress kernel: ... autorun DONE.
Mar 9 18:47:08 ingress kernel: Bad md_map in ll_rw_block
Mar 9 18:47:08 ingress kernel: EXT2-fs: unable to read superblock
===

I'm running RAID 0.90 under RedHat 6.0, with four 4 GB drives in a RAID5 array. Is there a way to force the array to start? I need to get at the data on these drives.

Thanks for any assistance in this matter.

--
Eric Y. Theriault