On 04/04/2014 01:17 PM, CS_DBA wrote:
Hi All;

We have a server running Scientific Linux 6.4 with a large 4.1TB RAID 10
volume

We keep all of our KVM guests on this volume

Today we found that we are unable to access the VMs

After some checking we found errors such as the ones below in
/var/log/messages

We unmounted the RAID volume (umount /dev/md0) thinking we could run
some diagnostics and maybe fix it; however, I'm less than up to speed on
managing RAID volumes...

Can someone help me debug this? How do I run a "check" on a RAID volume?

cat /proc/mdstat

will tell you the state of the array(s).  Run that and report back.
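
If mdstat shows the array assembled and you just want the "check" you asked about, md exposes it through sysfs. A minimal sketch, assuming the array really is md0 (as in your umount command):

  mdadm --detail /dev/md0                      # member states, failed/removed devices
  echo check > /sys/block/md0/md/sync_action   # start a read-only consistency check
  cat /sys/block/md0/md/mismatch_cnt           # mismatches found once the check finishes

Note that "check" only reads and compares the copies; it won't bring back a member disk that has dropped out of the array.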

The errors in the log imply that the drive or controller behind ata5.01 has problems.
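
To confirm whether it is the disk itself, you can map ata5 to a device name and look at its SMART data. smartctl comes from the smartmontools package, and /dev/sdX below is just a placeholder for whatever device the kernel attached to ata5:

  dmesg | grep -i ata5            # find which sdX the kernel bound to ata5
  smartctl -a /dev/sdX            # SMART health, reallocated/pending sector counts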

The EXT4-fs error implies the underlying device (md0) cannot be read.
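
Once the array reads cleanly again, you can get a damage report on the filesystem without modifying anything, for example:

  fsck.ext4 -n /dev/md0           # read-only check, makes no changes

I would hold off on any repairing fsck until the md side is healthy; running it against a broken array can make things worse.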

Remember the "0" in RAID10 means that both of the underlying RAID1 pairs have to be usable (each needs at least one working member) or the RAID0 layer can't be read. Unfortunately this also implies that one of your RAID1 pairs is currently unreadable.
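
Whether this is a single md raid10 array or a RAID0 (md0) stacked on two RAID1 arrays, mdadm can show which half is missing. Device names below are only examples -- substitute your actual member partitions:

  mdadm --detail /dev/md0                 # overall layout, failed/removed slots
  mdadm --examine /dev/sd[abcd]1          # per-member superblocks, event counts, array state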

Jeff
