Rodney D. Myers wrote:
> 
> Did the above 1.5 hours ago; it was working diligently until a few
> moments ago.
> 
> Now this is what I see:
> 
> /sbin/mdadm --detail /dev/md0
> /dev/md0:
>         Version : 1.2
>   Creation Time : Wed Jun 25 16:03:44 2014
>      Raid Level : raid1
>      Array Size : 976630464 (931.39 GiB 1000.07 GB)
>   Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
>    Raid Devices : 3
>   Total Devices : 3
>     Persistence : Superblock is persistent
> 
>   Intent Bitmap : Internal
> 
>     Update Time : Wed Jun 25 17:40:37 2014
>           State : active, degraded, resyncing 
>  Active Devices : 1
> Working Devices : 1
>  Failed Devices : 2
>   Spare Devices : 0
> 
>   Resync Status : 6% complete
> 
>            Name : riverside:0  (local to host riverside)
>            UUID : 22da3cb6:9c3b1aa0:8c8ba2c9:6c3cf76d
>          Events : 1145
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       33        0      active sync   /dev/sdc1
>        1       8       49        1      faulty   /dev/sdd1
>        2       8       65        2      faulty   /dev/sde1

Are there any messages in the kernel log related to the disks or md which
could explain why the devices became faulty?
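
As a rough sketch (device names taken from the mdadm output above, adjust
as needed; journalctl is only available on systemd systems, otherwise look
in /var/log/kern.log or /var/log/syslog):

    # kernel ring buffer, filtered for md and the affected disks
    dmesg | grep -i -E 'md0|sd[cde]|ata'

    # persistent kernel log around the time the devices dropped out
    journalctl -k --since "2 hours ago" | grep -i -E 'md|sd[cde]'

    # current array state as the kernel sees it
    cat /proc/mdstat

    # SMART health and error log of the two disks marked faulty
    # (requires the smartmontools package)
    smartctl -H -l error /dev/sdd
    smartctl -H -l error /dev/sde

I/O errors, ATA resets or SMART errors there would point at the disks or
cabling rather than at md itself.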

