(resending now that the bug is unarchived, sorry for duplicates)
this bug was fixed in 3.3.4-1.1, but the 3.4-1 upload didn't acknowledge
the NMU, so the patch is lost (it is not present in
https://anonscm.debian.org/cgit/pkg-mdadm/mdadm.git/tree/debian/patches)
- please either integrate the patch f
OK, so what do we do from now on? The patch I proposed seems to fix the
bug which prevents booting a degraded RAID. However, I think this
patch is a regression for slow-to-appear devices.
Michael, you said you won't maintain this package any more. Would you mind
making an exception for this bug?
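For context, the kind of fix discussed here is an initramfs-tools "local-block" hook that forces assembled-but-inactive arrays to run degraded once the normal device wait expires. The sketch below is my reconstruction of that approach, not the literal patch; the file path and surrounding logic are assumptions:

```shell
#!/bin/sh
# Sketch of an initramfs-tools local-block hook (assumed location:
# /usr/share/initramfs-tools/scripts/local-block/mdadm).
# local-block scripts are re-run while the initramfs waits for the
# root device, so this retries until the array comes up.

PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

# Force any assembled-but-inactive md arrays to start, degraded if
# necessary. -q keeps it quiet; /dev/md?* matches md0, md1, ...
mdadm -q --run /dev/md?* 2>/dev/null || true
```

The trade-off mentioned above is visible here: firing this too early starts arrays degraded even when the missing member is merely slow to appear.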
On Thu, 11 Jun 2015 20:07:09 +0200 "Robert.K." wrote:
> On Thu, 11 Jun 2015 20:20:23 +0300 Michael Tokarev wrote:
> > 11.06.2015 20:13, Robert.K. wrote:
> >
> > > The bug in this report (#784070) is about being dropped to a shell when
> > > there are missing disks in a software RAID1 configuration upon boot.
On Thu, 11 Jun 2015 20:20:23 +0300 Michael Tokarev wrote:
> 11.06.2015 20:13, Robert.K. wrote:
>
> > The bug in this report (#784070) is about being dropped to a shell when
> > there are missing disks in a software RAID1 configuration upon boot.
>
> Ok, this makes sense.
>
> It is not RAID1, it is any RAID level, and it has nothing to do with GPT.
11.06.2015 20:13, Robert.K. wrote:
> The bug in this report (#784070) is about being dropped to a shell when there
> are missing disks in a software RAID1 configuration upon boot.
Ok, this makes sense.
It is not RAID1, it is any RAID level, and it has nothing to do with GPT.
/mjt
To clarify:
If rootdelay was confusing then forget all about rootdelay. It has nothing
to do with the problem this bug (#784070) is about; it is just another problem
that you may encounter before or after hitting this bug, when the system
waits for slow devices.
The bug in this report (#784070) is about being dropped to a shell when
there are missing disks in a software RAID1 configuration upon boot.
The RAID1 was a RAID1 and worked normally when both disks were present. But
with only one RAID1 disk connected, the boot gave up waiting for the root
device and dropped to an initramfs shell. THEN mdadm --detail showed the
RAID1 devices as RAID0 inside the initramfs shell.
Please look at Message #17
11.06.2015 14:21, Robert.K. wrote:
> I apologize if I missed something, but ONLY adding rootdelay=XX guessed
> seconds does not help against being dropped to an initramfs-shell.
>
> There may be two different bugs?
>
> One for when not waiting for slow devices but the boot continues, which is
>
I apologize if I missed something, but ONLY adding rootdelay=XX (guessed
seconds) does not help against being dropped to an initramfs shell.
There may be two different bugs?
One for when not waiting for slow devices but the boot continues, which is
cured by rootdelay=XX. This error/bug has the message
10.06.2015 13:40, Robert.K. wrote:
> I am also hit by this bug in Debian Jessie 8. I got it in a virtualized
> Virtualbox machine with one virtual disk (VDI file) and one disk attached
> through USB where the root was located on a md device.
This is #714155, which is fixed by increasing rootdelay.
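For anyone hitting the slow-device variant (#714155) rather than this bug: rootdelay is set on the kernel command line. A sketch for GRUB follows; the value 30 is only an example, tune it to how long your devices take to appear:

```shell
# /etc/default/grub -- add rootdelay (in seconds) to the kernel
# command line so the initramfs waits longer for the root device.
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=30"

# Then regenerate the GRUB configuration and reboot:
# update-grub
```

Note this only helps when the device eventually appears; it does not start a degraded array, which is the point of this bug report.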
I have the same issue, and it doesn't seem to be limited to GPT.
I am also hit by this bug in Debian Jessie 8. I got it in a virtualized
Virtualbox machine with one virtual disk (VDI file) and one disk attached
through USB where the root was located on a md device.
The first error message that appeared was: Gave up waiting for root device
It is not mentioned in
31.05.2015 23:05, Info Geek wrote:
> I'm not into scripting voodoo but if this is of any help, this is a reply I got
> when I asked about it, before the bug report:
>
>
> http://serverfault.com/questions/688207/how-to-auto-start-degraded-software-raid1-under-debian-8-0-0-on-boot
This is a wrong s
I'm not into scripting voodoo but if this is of any help, this is a reply
I got when I asked about it, before the bug report:
http://serverfault.com/questions/688207/how-to-auto-start-degraded-software-raid1-under-debian-8-0-0-on-boot
31.05.2015 18:56, Marc Meledandri wrote:
> Is any further info needed here?
>
> I've run into this bug with _both_ GPT and MBR partition tables.
It is independent of the type of underlying devices.
The problem is that with incremental array assembly, when not all
devices are present, we need som
Is any further info needed here?
I've run into this bug with _both_ GPT and MBR partition tables.
As mentioned, this is a regression from Wheezy behavior.
--
To UNSUBSCRIBE, email to debian-bugs-rc-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debi
I can confirm the bug.
After boot, in the initramfs BusyBox shell, the array is assembled but in an
inactive state.
The array can be started with
mdadm --run /dev/md0
The array will then start as expected in degraded mode,
but this doesn't solve the issue:
it should be done automatically at boot.
Testing config:
v
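Spelled out, the manual recovery from the initramfs prompt looks like this (commands only; actual output will vary by setup):

```shell
# From the (initramfs) prompt: inspect, force-start, continue booting.
cat /proc/mdstat          # array is listed, but inactive
mdadm --detail /dev/md0   # note the misleading state/level reported
mdadm --run /dev/md0      # force the array to start degraded
exit                      # resume the normal boot
```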
I can reproduce the problem, which did not happen with Wheezy.
UEFI boot, fresh Debian 8 amd64, RAID 1 on two GPT disks.
Another person experienced it too on disks with legacy MBR/MSDOS
partition scheme, so I do not think it is related to GPT.
Note that this does not happen when the missing member
You can easily reproduce it in VirtualBox:
0) Create 3 virtual HDDs and attach them to the VM.
1) Partition each RAID1 member using GPT:
   1. BIOS GRUB - 1 MB
   2. RAID partition
   3. RAID partition
2) Use:
   sd[a-c]3 as RAID1 EXT4 (md0);
   sd[a-c]2 as RAID1 swap (md1).
3)
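Steps 0)-2) above can be scripted roughly as follows. VM name, disk sizes, and partition sizes are examples of mine, not from the report; the host part needs VirtualBox, the guest part needs gdisk and mdadm:

```shell
# On the host: create and attach three virtual disks (step 0).
for i in 1 2 3; do
  VBoxManage createmedium disk --filename "raid$i.vdi" --size 4096
  VBoxManage storageattach "debian8-test" --storagectl "SATA" \
    --port "$i" --device 0 --type hdd --medium "raid$i.vdi"
done

# Inside the guest: GPT-partition each RAID member (step 1).
for d in /dev/sda /dev/sdb /dev/sdc; do
  sgdisk -n1:0:+1M -t1:ef02 "$d"   # BIOS GRUB partition (1 MB)
  sgdisk -n2:0:+1G -t2:fd00 "$d"   # RAID partition for swap
  sgdisk -n3:0:0   -t3:fd00 "$d"   # RAID partition for root (rest of disk)
done

# Create the arrays (step 2).
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sd[abc]3
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sd[abc]2
```

Then install to md0, shut down, detach one disk, and boot again to trigger the bug.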
Control: tag -1 + moreinfo unreproducible
02.05.2015 21:41, Sad Person wrote:
> Package: mdadm
> Version: 3.3.2-5
> Severity: critical
> -- Package-specific info:
> --- mdadm.conf
> CREATE owner=root group=disk mode=0660 auto=yes
> HOMEHOST
> MAILADDR root
...cut...
Processing control commands:
> tag -1 + moreinfo unreproducible
Bug #784070 [mdadm] mdadm Software RAID1 with GPT on Debian 8.0.0 amd64 - Does
not mount/boot on disk removal
Added tag(s) unreproducible and moreinfo.
--
784070: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784070
Debian Bug Tracking System
Package: mdadm
Version: 3.3.2-5
Severity: critical
-- Package-specific info:
--- mdadm.conf
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST
MAILADDR root
ARRAY /dev/md1 metadata=1.2 UUID=856fee1e:feccb34a:f798724a:36e91658
name=FluffyBunny:1
ARRAY /dev/md0 metadata=1.2 UUID=b597bb3c