> on the /dev/md2 raid volume, the /dev/sdb3 partition keeps getting
> marked as failed. This happens consistently after about 1.5-2 days of use.
> /dev/md2 is where I mount /usr/local on this system, which houses apache
> and openldap, so I don't think it is a cron job or a burst of activity that
> is triggering the failures.
Do you see anything in the log? SCSI parity error, for example?
With three identical drives on the same controller, that rules out a lot
of issues that could arise. If you are seeing the same drive being marked
bad over and over, and it takes days for that to happen, I would look at
(1) termination.
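
To make the log check concrete, something like the following would show it
(a sketch assuming raidtools 0.90, the userland matching mingo's 0.90
patches, and syslog writing to /var/log/messages):

    # look for SCSI errors logged around the time the member was failed
    grep -i scsi /var/log/messages | tail -50

    # current array state; a failed member shows up flagged with (F)
    cat /proc/mdstat

    # once the underlying problem is fixed, hot-add the partition back
    raidhotadd /dev/md2 /dev/sdb3

Note that raidhotadd only helps after the real cause is found; otherwise
the rebuild will just fail again in a day or two.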
Hi,
This one's a bit long-winded, so skip if you are not into a good
read...
I am having some problems with my raid5 system:
kernel 2.2.14 with raid-2.2.14-b1 from
people.redhat.com/mingo/raid-patches
Tekram 390UW controller with 3 Seagate Medalist ST391
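
For anyone following along, the matching /etc/raidtab would look roughly
like the sketch below (/dev/md2 and /dev/sdb3 are from the report above;
the other member partitions and the chunk size are assumptions):

    # minimal raidtab sketch for a 3-disk raid5 under raidtools 0.90
    # /dev/sdb3 is the member being failed; sda3/sdc3 are assumed
    raiddev /dev/md2
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              32
        device                  /dev/sda3
        raid-disk               0
        device                  /dev/sdb3
        raid-disk               1
        device                  /dev/sdc3
        raid-disk               2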