Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2016-03-03 Thread Sandro Tosi
(resending now that the bug is unarchived, sorry for duplicates)

this bug was fixed in 3.3.4-1.1, but the 3.4-1 upload didn't acknowledge
the NMU, so the patch is lost (it is not present in
https://anonscm.debian.org/cgit/pkg-mdadm/mdadm.git/tree/debian/patches)
- please either integrate the patch from the NMU or implement a
different solution with the same result.

this is, and remains, a critical bug.

Thanks for maintaining mdadm!



Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-10-23 Thread Yann-externe SOUBEYRAND
OK, so what do we do from now on? The patch I proposed seems to fix the 
bug which prevents booting from a degraded RAID. However, I think this 
patch is a regression for slow-to-appear devices.

Michael, you said you won't maintain this package any more. Would you mind 
making an exception for this little patch if you decide it's worth 
including despite the regression? Or do you want me to prepare an NMU? 
In that case, can you give me your opinion on whether this patch is 
worth including?

Cheers

Yann

Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-24 Thread Lukasz T.
On Thu, 11 Jun 2015 20:07:09 +0200 "Robert.K."  wrote:
> On Thu, 11 Jun 2015 20:20:23 +0300 Michael Tokarev  wrote:
> > 11.06.2015 20:13, Robert.K. wrote:
> >
> > > The bug in this report (#784070) is about being dropped to a shell
> > > when there are missing disks in a software RAID1 configuration upon boot.
> >
> > Ok, this makes sense.
> >
> > It is not RAID1, it is any RAID level, and it has nothing to do with GPT.
> >
> > /mjt
> >
> >
>
> I agree that it may be related to any RAID level. For me it was only
> RAID1, as I have only tried RAID1 configurations.
>
> As far as I know, I have not mentioned GPT.
>
> I am sorry if I have upset you; I was only trying to help both the
> development and other people hitting this bug.
>
> Good luck if you try to solve it and to work with the d-i team (Debian
> installer?) again; you seem to know what you are doing.
>
> r

I think the problem is in the file
/usr/share/initramfs-tools/scripts/local-top/mdadm, on line 79
(mdadm-3.3.2-5-amd64).
The solution is to replace line 79:

log_failure_msg "failed to assemble all arrays."

by:

log_warning_msg "failed to assemble all arrays...attempting individual starts"
# /proc/mdstat lists arrays as "mdN : ..."; force-start each one by its
# device node (note: mdadm needs the /dev/ path, not the bare name)
for dev in $(grep '^md' /proc/mdstat | cut -d ' ' -f 1); do
  log_begin_msg "attempting mdadm --run $dev"
  if $MDADM --run "/dev/$dev"; then
    verbose && log_success_msg "started $dev"
  else
    log_failure_msg "failed to start $dev"
  fi
done

And that works. Found and tested on a Polish Debian forum.
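
Note that since this edits an initramfs-tools script, the change only takes
effect after regenerating the initramfs, e.g.:

update-initramfs -u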





Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-11 Thread Robert.K.
On Thu, 11 Jun 2015 20:20:23 +0300 Michael Tokarev  wrote:
> 11.06.2015 20:13, Robert.K. wrote:
>
> > The bug in this report (#784070) is about being dropped to a shell when
> > there are missing disks in a software RAID1 configuration upon boot.
>
> Ok, this makes sense.
>
> It is not RAID1, it is any RAID level, and it has nothing to do with GPT.
>
> /mjt
>
>

I agree that it may be related to any RAID level. For me it was only
RAID1, as I have only tried RAID1 configurations.

As far as I know, I have not mentioned GPT.

I am sorry if I have upset you; I was only trying to help both the
development and other people hitting this bug.

Good luck if you try to solve it and to work with the d-i team (Debian
installer?) again; you seem to know what you are doing.

r


Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-11 Thread Michael Tokarev
11.06.2015 20:13, Robert.K. wrote:

> The bug in this report (#784070) is about being dropped to a shell when there 
> are missing disks in a software RAID1 configuration upon boot.

Ok, this makes sense.

It is not RAID1, it is any RAID level, and it has nothing to do with GPT.

/mjt





Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-11 Thread Robert.K.
To clarify:

If rootdelay was confusing, then forget all about rootdelay. It has nothing
to do with the problem this bug (#784070) is about; it is just another problem
that you may encounter before or after hitting this bug, when the system
waits for slow devices.

The bug in this report (#784070) is about being dropped to a shell when
there are missing disks in a software RAID1 configuration upon boot.

r


2015-06-11 19:03 GMT+02:00 Robert.K. :

> The RAID1 was a RAID1 and worked normally when both disks were present.
> But with only one RAID1 disk connected, mdadm gave up waiting for the root
> device and I was dropped to an initramfs shell. THEN mdadm --detail showed
> the RAID1 devices as RAID0 inside the initramfs shell.
>
> Please look at Message #17 in this (#784070) bug report, this guy gets the
> same result:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784070#17
>
> I cite message 17:
>
> "Description: What happens is the array becomes inactive on any disk
>
> removal(degraded?), marked as RAID0(for some reason) and all attached
> disks are marked as [S] (for spare) upon reboot.
> However, it is possible to boot from it by starting it in the
> "(initramfs)" shell (which it drops to because it "cannot mount root
> device") by using:
>
> (initramfs):  mdadm --run /dev/md0
> (initramfs):  mdadm --run /dev/md1
> (initramfs):  exit"
>
> rootdelay alone does not solve the problem. rootdelay=15 (not rootwait) works 
> TOGETHER with the local-top script from serverfault, which you can find here 
> - this link is in message #54: 
> http://serverfault.com/questions/688207/how-to-auto-start-degraded-software-raid1-under-debian-8-0-0-on-boot
>
> There is also a suggestion as to what the problem is on serverfault; I cite 
> from serverfault, see the link above:
>
> "With the version of mdadm shipping with Debian Jessie, the --run parameter 
> seems to be ignored when used in conjunction with --scan. According to the 
> man page it is supposed to activate all arrays even if they are degraded. But 
> instead, any arrays that are degraded are marked as 'inactive'. If the root 
> filesystem is on one of those inactive arrays, the boot process is halted."
>
> The reason I mentioned rootdelay was because you were talking about the need 
> for timeouts for slow devices in message #49. I remembered that adding 
> rootdelay solved my timing problem for slow devices and allowed my USB disk 
> to become available.
>
> I think we should leave the issue that is fixed by rootdelay out of this, as 
> it belongs to another bug/problem.
>
> r
>
>


Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-11 Thread Robert.K.
The RAID1 was a RAID1 and worked normally when both disks were present. But
with only one RAID1 disk connected, mdadm gave up waiting for the root
device and I was dropped to an initramfs shell. THEN mdadm --detail showed
the RAID1 devices as RAID0 inside the initramfs shell.

Please look at Message #17 in this (#784070) bug report, this guy gets the
same result:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784070#17

I cite message 17:

"Description: What happens is the array becomes inactive on any disk

removal(degraded?), marked as RAID0(for some reason) and all attached
disks are marked as [S] (for spare) upon reboot.
However, it is possible to boot from it by starting it in the
"(initramfs)" shell (which it drops to because it "cannot mount root
device") by using:

(initramfs):  mdadm --run /dev/md0
(initramfs):  mdadm --run /dev/md1
(initramfs):  exit"

rootdelay alone does not solve the problem. rootdelay=15 (not
rootwait) works TOGETHER with the local-top script from serverfault,
which you can find here - this link is in message #54:
http://serverfault.com/questions/688207/how-to-auto-start-degraded-software-raid1-under-debian-8-0-0-on-boot

There is also a suggestion as to what the problem is on serverfault; I
cite from serverfault, see the link above:

"With the version of mdadm shipping with Debian Jessie, the --run
parameter seems to be ignored when used in conjunction with --scan.
According to the man page it is supposed to activate all arrays even
if they are degraded. But instead, any arrays that are degraded are
marked as 'inactive'. If the root filesystem is on one of those
inactive arrays, the boot process is halted."
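
For illustration, the two invocations being contrasted are roughly these
(the array name is just an example):

# what the initramfs does: scan for and assemble all arrays;
# --run is documented to start them even when degraded
mdadm --assemble --scan --run

# what actually gets a degraded, inactive array going from the
# (initramfs) shell: force-start it individually
mdadm --run /dev/md0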

The reason I mentioned rootdelay was because you were talking about
the need for timeouts for slow devices in message #49. I remembered
that adding rootdelay solved my timing problem for slow devices and
allowed my USB disk to become available.

I think we should leave the issue that is fixed by rootdelay out of
this, as it belongs to another bug/problem.

r


Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-11 Thread Michael Tokarev
11.06.2015 14:21, Robert.K. wrote:
> I apologize if I missed something, but ONLY adding rootdelay=XX (guessed 
> seconds) does not help against being dropped to an initramfs shell.
> 
> There may be two different bugs?
> 
> One for when the boot does not wait for slow devices but continues, which is 
> cured by rootdelay=xx. This error/bug has the message "Found some drive for 
> an array that is already active" and blinks by when booting. I guess this is 
> #714155?

No.  This is something entirely different, and is specific to your setup.

> And another - a much worse one - that halts the boot process and drops to an 
> initramfs shell where some of the md devices are shown as RAID0 instead of 
> RAID1 when doing mdadm --detail /dev/mdX

Here's your problem.  I've no idea how this happened, but you have a messed-up
configuration of your array(s) (hopefully just one).  This is neither #714155
nor #784070, and it is something which, unless there's a serious bug in mdadm,
should not happen at all.  As far as I understand, your raid1 somehow became
raid0, which is kind of impossible without manual intervention and/or messing
with the metadata.

Please note that your first email (to which I replied) in this thread was
really different: it showed a successful result when adding rootwait.  Now you
say it doesn't work and that your raid array has been converted to raid0.

If you need your particular problem to be resolved (if it is possible to
recover from that state), please collect all information (mdadm --detail,
mdadm --examine for all relevant devices, the configuration which you expect
and the configuration which you actually have, the contents of mdadm.conf) and
post to linux-r...@vger.kernel.org asking for support.  When doing so,
please refrain from using phrases like "Adding bootdegraded=1 and such"
and "after forcing a boot etc" -- these are not helpful at all; there's
no action "such" or "etc", this might mean anything and we don't have a
crystal ball to read your mind.
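
For example, something along these lines collects the relevant state (the
device and array names here are only examples):

mdadm --detail /dev/md0
mdadm --examine /dev/sdb2 /dev/sdc2
cat /proc/mdstat
cat /etc/mdadm/mdadm.conf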

/mjt





Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-11 Thread Robert.K.
I apologize if I missed something, but ONLY adding rootdelay=XX (guessed
seconds) does not help against being dropped to an initramfs shell.

There may be two different bugs?

One for when the boot does not wait for slow devices but continues, which is
cured by rootdelay=xx. This error/bug has the message "Found some drive for
an array that is already active" and blinks by when booting. I guess this
is #714155?

And another - a much worse one - that halts the boot process and drops to an
initramfs shell where some of the md devices are shown as RAID0 instead of
RAID1 when doing mdadm --detail /dev/mdX from the initramfs shell, or after
forcing a boot with mdadm --run etc. from the initramfs shell. This
error/bug has the message "Gave up waiting for root device..."  And this
is #784070?

r


Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-10 Thread Michael Tokarev
10.06.2015 13:40, Robert.K. wrote:
> I am also hit by this bug in Debian Jessie 8. I got it in a virtualized 
> Virtualbox machine with one virtual disk (VDI file) and one disk attached 
> through USB where the root was located on a md device.

This is #714155, which is fixed by increasing rootdelay and has nothing to do
with #784070.

/mjt





Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-10 Thread Ben King
I have the same issue, and it doesn't seem to be limited to GPT.


Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-06-10 Thread Robert.K.
I am also hit by this bug in Debian Jessie 8. I got it in a virtualized
Virtualbox machine with one virtual disk (VDI file) and one disk attached
through USB where the root was located on a md device.

The first error message that appeared was: "Gave up waiting for root device".
It is not mentioned in this bug report, so people searching for that message
would not find it - until now, hopefully.

I used the workaround described on serverfault, which works for now. Adding
bootdegraded=1 and such to the kernel command line does not work.

I don't know if it has to do with mdadm, but before trying to detach one of
the disks (to test an eventual disk failure) I in fact had problems with
mdadm not waiting for the slower USB disk to become available. I solved
this by adding rootdelay=15 to /etc/default/grub.

I changed this line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"

To this:
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=15"

And then I ran update-grub.

I consider this a critical bug. RAID1 systems, which should be more fault
tolerant, may currently not boot in Debian Jessie 8 if one of the disks is
out of order.

r


Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-31 Thread Michael Tokarev
31.05.2015 23:05, Info Geek wrote:
> I'm not into scripting voodoo, but if it is of any help, this is a reply I got 
> when I asked about it before filing the bug report:
> 
> 
> http://serverfault.com/questions/688207/how-to-auto-start-degraded-software-raid1-under-debian-8-0-0-on-boot

This is a wrong solution.  I mentioned the right solution in my
previous reply.  The solution offered on serverfault moves us back
to one-time raid array assembly, which does not work for slow
devices.

/mjt





Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-31 Thread Info Geek
I'm not into scripting voodoo, but if it is of any help, this is a reply 
I got when I asked about it before filing the bug report:



http://serverfault.com/questions/688207/how-to-auto-start-degraded-software-raid1-under-debian-8-0-0-on-boot 






Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-31 Thread Michael Tokarev
31.05.2015 18:56, Marc Meledandri wrote:
> Is any further info needed here?
> 
> I've run into this bug with _both_ GPT and MBR partition tables.

It is independent of the type of underlying devices.

The problem is that with incremental array assembly, when not all
devices are present, we need some timer, after which we have to run
the array with whatever devices are currently available.

This is something I overlooked when switching from one-time array
assembly to incremental mode.  Incremental mode is needed when
the underlying devices are slow; there were multiple bug reports
about Debian mdadm being unable to assemble raid arrays on devices
such as mpt or usb.

The current initramfs-tools infrastructure does not provide the
hooks necessary for such a timeout.
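
A minimal sketch of what such a timeout could look like in a local-top
script (an illustration only, not existing mdadm or initramfs-tools code):

# give incremental assembly some time to find all members...
slumber=15
while [ "$slumber" -gt 0 ] && grep -q inactive /proc/mdstat; do
    sleep 1
    slumber=$((slumber - 1))
done
# ...then force-run whatever is still inactive
for dev in $(grep '^md' /proc/mdstat | cut -d ' ' -f 1); do
    mdadm --run "/dev/$dev" || true
done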

But more importantly, I personally can't work on any package
which touches debian-installer, because apparently some of the
more important d-i team members dislike me.  So I can't really
fix anything in mdadm anymore, as it produces a d-i component.

Thanks,

/mjt





Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-31 Thread Marc Meledandri
Is any further info needed here?

I've run into this bug with _both_ GPT and MBR partition tables.

As mentioned, this is a regression from Wheezy behavior.





Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-13 Thread mda ml
I can confirm the bug.
After boot, in the initramfs busybox, the array is assembled but in an
inactive state.
The array can be started with

mdadm --run /dev/md0

and it will start as expected in degraded mode,
but this doesn't solve the issue.
It should be done automatically at boot.


Testing config:
virtualbox with 2 disks (MBR) configured in raid 1
Tried with and without LVM.
Detaching a disk and booting the OS fires up busybox.

Fully reproducible.

Still working on it.


Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-09 Thread Pascal Hambourg
I can reproduce the problem, which did not happen with Wheezy.
UEFI boot, fresh Debian 8 amd64, RAID 1 on two GPT disks.

Another person experienced it too on disks with legacy MBR/MSDOS
partition scheme, so I do not think it is related to GPT.

Note that this does not happen when the missing member was previously
recorded as faulty.

The problem seems to be caused by the incremental assembly performed by
udev in /lib/udev/rules.d/64-md-raid-assembly.rules. When a member is
missing, it leaves the array in an "inactive" state. The classic
assembly performed by the mdadm initramfs script
(/usr/share/initramfs-tools/scripts/local-top/mdadm) does not activate
it, probably because the array already exists.

Should the mdadm script force activation of required degraded arrays
with mdadm --run?
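
A rough sketch of what that could look like at the end of the local-top
script (just an idea, not a tested patch; $MDADM is the variable that
script already uses for the mdadm binary):

# force-start any md device that incremental (udev) assembly left inactive
for md in /dev/md*; do
    [ -b "$md" ] || continue
    $MDADM --run "$md" || true
done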





Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-03 Thread Info Geek

You can easily reproduce it in VirtualBox:

0) Create 3 virtual HDDs and attach them to the guest system.

1) Partition each RAID1 member using GPT:
   1. BIOS GRUB - 1 MB
   2. RAID partition
   3. RAID partition

2) Use:
 sd[a-c]3 as RAID1 EXT4 (md0);
 sd[a-c]2 as RAID1 swap (md1)
 (example mdadm commands follow this list).

3) Install the base system on md0.

4) Install GRUB on all three RAID1 members.

5) Boot up and wait for a full resync.

6) Shut down the virtual machine and remove/detach any one or two virtual HDDs.

7) Boot the machine and see THE PROBLEM.
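
For reference, the arrays from step 2 could be created manually with
something like the following (in my case they were created by the Debian
installer, so these exact commands are only an approximation):

mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
mkfs.ext4 /dev/md0
mkswap /dev/md1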


Description: What happens is the array becomes inactive on any disk 
removal (degraded?), marked as RAID0 (for some reason), and all attached 
disks are marked as [S] (for spare) upon reboot.
However, it is possible to boot from it by starting it in the 
"(initramfs)" shell (which it drops to because it "cannot mount root 
device") by using:


(initramfs):  mdadm --run /dev/md0
(initramfs):  mdadm --run /dev/md1
(initramfs):  exit

after which it boots up fine and shows the arrays as degraded (and it will 
continue to boot fine even if rebooted, UNTIL one or more disks get 
removed, in which case it will do exactly the same as with a freshly 
installed array, i.e. drop to an (initramfs) shell).


The problem is: how do we allow booting from a degraded array without 
manually starting it each time a disk is removed/dies?


Extra Information:

 * Google spat out a bunch of posts about UBUNTU using
   "BOOT_DEGRADED=true", but that doesn't work for DEBIAN.
 * There is also a post about using "md-mod.start_dirty_degraded=1" as
   a boot argument to the kernel image [1].
   I have tried passing it in the GRUB menu option, to no avail.
 * There might be something that explains it [2], but I am too much of
   a newbie to understand it :(


Links in the above text:
[1] http://serverfault.com/questions/196445/boot-debian-while-raid-array-is-degraded
[2] https://www.kernel.org/doc/Documentation/md.txt


On 2015-05-03 23:32, Michael Tokarev wrote:

Control: tag -1 + moreinfo unreproducible

02.05.2015 21:41, Sad Person wrote:

Package: mdadm
Version: 3.3.2-5
Severity: critical
-- Package-specific info:
--- mdadm.conf
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST 
MAILADDR root

...cut...




Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-03 Thread Michael Tokarev
Control: tag -1 + moreinfo unreproducible

02.05.2015 21:41, Sad Person wrote:
> Package: mdadm
> Version: 3.3.2-5
> Severity: critical

> -- Package-specific info:
> --- mdadm.conf
> CREATE owner=root group=disk mode=0660 auto=yes
> HOMEHOST 
> MAILADDR root
...cut...





Processed: Re: Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-03 Thread Debian Bug Tracking System
Processing control commands:

> tag -1 + moreinfo unreproducible
Bug #784070 [mdadm] mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does 
not mount/boot on disk removal
Added tag(s) unreproducible and moreinfo.

-- 
784070: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784070
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems





Bug#784070: mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does not mount/boot on disk removal

2015-05-02 Thread Sad Person
Package: mdadm
Version: 3.3.2-5
Severity: critical



-- Package-specific info:
--- mdadm.conf
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST 
MAILADDR root
ARRAY /dev/md1  metadata=1.2 UUID=856fee1e:feccb34a:f798724a:36e91658 name=FluffyBunny:1
ARRAY /dev/md0  metadata=1.2 UUID=b597bb3c:f50fff2a:f395548b:ccbb48d1 name=FluffyBunny:0

--- /etc/default/mdadm
INITRDSTART='all'
AUTOCHECK=true
START_DAEMON=true
DAEMON_OPTIONS="--syslog"
VERBOSE=false

--- /proc/mdstat:
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb2[0] sdc2[1] sdd2[2]
  7808000 blocks super 1.2 [3/3] [UUU]
  
md0 : active raid1 sdd3[2] sdc3[1] sdb3[0]
  1945569280 blocks super 1.2 [3/3] [UUU]
  bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: 

--- /proc/partitions:
major minor  #blocks  name

  11        0    1048575 sr0
   8        0  976762584 sda
   8        1  976761560 sda1
   8       16 1953514584 sdb
   8       17       1024 sdb1
   8       18    7812096 sdb2
   8       19 1945700352 sdb3
   8       32 1953514584 sdc
   8       33       1024 sdc1
   8       34    7812096 sdc2
   8       35 1945700352 sdc3
   8       48 1953514584 sdd
   8       49       1024 sdd1
   8       50    7812096 sdd2
   8       51 1945700352 sdd3
   9        0 1945569280 md0
   9        1    7808000 md1

--- LVM physical volumes:
LVM does not seem to be used.
--- mount output
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=2041852,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=3270412k,mode=755)
/dev/md0 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)

--- initrd.img-3.16.0-4-amd64:
92433 blocks
599bbf3fe6093157a26863dcb59cdf5d  ./scripts/local-top/mdadm
da9588388e9d3dbd528a6e7aa2dd3fe7  ./sbin/mdadm
5e35b64dad54196cb721776566c04545  ./lib/modules/3.16.0-4-amd64/kernel/drivers/md/raid456.ko
a446065a313394c333967e3540d32923  ./lib/modules/3.16.0-4-amd64/kernel/drivers/md/raid10.ko
b9e70a0d04a34eecc2fff9722a27af99  ./lib/modules/3.16.0-4-amd64/kernel/drivers/md/dm-mod.ko
faf18e3ae8bbb882f45fa147450ba375  ./lib/modules/3.16.0-4-amd64/kernel/drivers/md/multipath.ko
9bd0a33fa738d559a1c4ff83cace11ba  ./lib/modules/3.16.0-4-amd64/kernel/drivers/md/linear.ko
b0d26ac52a08e149dd76407ede65366f  ./lib/modules/3.16.0-4-amd64/kernel/drivers/md/md-mod.ko
7a737d39c9b208cb27ada8751b601bd9  ./lib/modules/3.16.0-4-amd64/kernel/drivers/md/raid1.ko
f5f26f436a749b8d908e4155764e5d64  ./lib/modules/3.16.0-4-amd64/kernel/drivers/md/raid0.ko
4fa29a08bf629f67a35e34443a6290e4  ./conf/mdadm
d3be82c0f275d6c25b04d388baf9e836  ./etc/modprobe.d/mdadm.conf
ae5f7761fa2cff7b86e81fed2ee5db1f  ./etc/mdadm/mdadm.conf

--- initrd's /conf/conf.d/md:
no conf/md file.

--- /proc/modules:
raid1 34596 2 - Live 0xa0078000
md_mod 107672 3 raid1, Live 0xa017

--- /var/log/syslog:

--- volume detail:
/dev/sda:
   MBR Magic : aa55
Partition[0] :   1953523120 sectors at 2048 (type 83)
--
/dev/sda1 is not recognised by mdadm.
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   3907029167 sectors at 1 (type ee)
--
/dev/sdb1 is not recognised by mdadm.
/dev/sdb2:
  Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
 Array UUID : 856fee1e:feccb34a:f798724a:36e91658
   Name : FluffyBunny:1  (local to host FluffyBunny)
  Creation Time : Sat May  2 02:12:51 2015