Your message dated Mon, 28 Mar 2011 21:33:42 +0100
with message-id <201103282133.43105.el...@debianpt.org>
and subject line Re: Bug#572909: on lvm on raid: failed
has caused the Debian Bug report #572909,
regarding installation on lvm on raid: failed
to be marked as done.
This means that you claim that the problem has been dealt with.
Hi
After modifying /etc/default/grub and updating grub, I restarted the
machine, and... it booted up without problem.
Kill me now! I must have made mistakes when tweaking the boot menu.
I am very sorry to have troubled you and taken up your time.
Thank you.
On Mon, 28 Mar 2011 13:36:30 +0900
水原 文 (MIZUHARA Bun) wrote:
Hi
I have used rootdelay=9, as described in the release-notes.
Frankly, I am not familiar with grub2, and as I specified it from the
grub boot menu, there is a slight chance that I made some mistakes.
I did some research and found that I can write
GRUB_CMDLINE_LINUX="rootdelay=9"
in the file /etc
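The change discussed in this thread can be sketched as follows (a sketch only, assuming a standard grub2 layout; run as root):

```shell
# Add rootdelay=9 to the kernel command line in /etc/default/grub
# (GRUB_CMDLINE_LINUX may already carry other options; keep them):
#   GRUB_CMDLINE_LINUX="rootdelay=9"
editor /etc/default/grub

# Regenerate /boot/grub/grub.cfg so the parameter is applied at boot:
update-grub

# After rebooting, confirm the kernel actually saw it:
grep -o 'rootdelay=[0-9]*' /proc/cmdline
```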
On Saturday, 26 March 2011 09:11:03, MIZUHARA Bun, you wrote:
> Package: installation-reports
> Severity: normal
>
> It seems that I am having the same problem as Herbert had.
> Specifying "rootdelay" did not help, but after countless installation retries,
> I managed to figure out a recovery process
Hi,
please check the installation log [1] on what happened.
I see you have a SCSI storage controller, and there are known issues when
installing with hw-detect below version 1.85 (see #611314).
Can you send the installation log (gzipped) ?
1 - /var/log/installer/syslog
--
Best regards,
Package: installation-reports
Severity: normal
It seems that I am having the same problem as Herbert had.
Specifying "rootdelay" did not help, but after countless installation retries,
I managed to figure out a recovery process from the shell prompt on initramfs.
(initramfs) sh scripts/local-top/
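The report truncates before the full recovery procedure. As a hedged sketch (script and command names assume Debian's mdadm and lvm2 initramfs hooks and are not taken from the truncated report), manual recovery from the `(initramfs)` prompt usually amounts to:

```shell
# Assemble all RAID arrays listed in /etc/mdadm/mdadm.conf:
mdadm --assemble --scan

# Activate every LVM volume group found on the assembled arrays
# (inside the initramfs the lvm tools are invoked via the "lvm" binary):
lvm vgchange -ay

# Hand control back to the init scripts; boot continues with / available:
exit
```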
Problems:
- the system has one disk connected to the chipset IDE controller:
it is /dev/sdc in the list above. The system is configured to
boot from this disk. Additionally, it has two identical disks
connected to an onboard Promise PDC20256 UDMA133 controller
used as LVM on RAID-1 to install sq
Otavio Salvador wrote:
> Herbert, can you reproduce it? We need more information and the syslog
> file of the installer (gzipped) so we can identify where it is
> failing.
Sorry, the old files are gone, and I am just about to leave
for a two weeks vacation. I will try to reproduce the bug
when
Hello,
On Mon, Aug 23, 2010 at 5:51 AM, Colin Watson wrote:
> On Sun, Aug 22, 2010 at 10:16:30PM -0300, Otavio Salvador wrote:
>> This boot failure of LVM above RAID looks to be a GRUB issue;
>
> It doesn't look like it to me. The error message is from the initramfs,
> which implies that GRUB loaded the kernel and initramfs successfully.
On Sun, Aug 22, 2010 at 10:16:30PM -0300, Otavio Salvador wrote:
> This boot failure of LVM above RAID looks to be a GRUB issue;
It doesn't look like it to me. The error message is from the initramfs,
which implies that GRUB loaded the kernel and initramfs successfully.
The only way in which this
Hello Herbert,
This boot failure of LVM above RAID looks to be a GRUB issue; I am
unsure if this has been fixed or not so I am copying Colin on the
message so he can give his feedback on this when possible.
Cheers,
--
Otavio Salvador O.S. Systems
E-mail: ota...@ossystems.com.br h
pass the rootdelay=X boot parameter as documented in debian-release,
where X is a number of seconds, depending on your hardware.
X >= 9 may help.
--
maks
--
To UNSUBSCRIBE, email to debian-boot-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: htt
Package: installation-reports
Boot method: businesscard cd
Image version:
http://cdimage.debian.org/cdimage/daily-builds/daily/arch-latest/i386/iso-cd/debian-testing-i386-businesscard.iso
Date: 06-MAR-2010 18:00
Machine: made from parts
Processor: Athlon XP 1800+
Memory: 1GByte
Partitions:
Di
oup because the volume group was in use. I also added
> some mdadm commands to have it shutdown the raid and those
> would refuse to do it because it was still in use.
I've tried to reproduce this in vmware, but cannot.
I do admit that the messages on shutdown [1] with / on lvm on raid
Processing commands for [EMAIL PROTECTED]:
> reassign 403299 mdadm
Bug#403299: (no subject)
Bug reassigned from package `installation-reports' to `mdadm'.
> thanks
Stopping processing here.
Please contact me if you need assistance.
Debian bug tracking system administrator
(administrator, Debian
On Sat, 16 Dec 2006, Frans Pop wrote:
I have some doubts about that. / on LVM on RAID is known to work
correctly. Are you sure that the RAID had been fully synced before you
shut down? It seems to me the most likely cause of it being dirty was
that the drives had not yet had a chance to complete
I just tried a root-on-RAID1 setup and the only issue was that GRUB was
only installed to the MBR of the first hard disk, so when I simulated
the failure of the first disk, the system would not boot.
--
Sam Morris
http://robots.org.uk/
PGP key id 1024D/5EA01078
3412 EA18 1277 354B 991B C869 B21
sets the "lvm"
flag for a "dummy" partition that is created by partman in
init.d/31md-devices.
Earlier hacks in partman-lvm have already made sure that partman itself
does not rely only on the flag, but sets "method" instead (based on the
output of 'pvdisplay
On Saturday 08 July 2006 18:40, Jens Seidel wrote:
> > For Sarge installations a different, but similar message was shown:
> >"The kernel was unable to re-read the partition table on /dev/md/0
> > (Invalid argument). This means Linux won't know anything nothing
>
> Is this "anything nothin
On Sat, Jul 08, 2006 at 06:15:51PM +0200, Frans Pop wrote:
> After a RAID device has been set up and that device has been selected "for
> use with LVM", the following message is displayed when changes to devices
> and partitions are committed to disk before configuring LVM:
>"Error informing
d by partman in
init.d/31md-devices.
Earlier hacks in partman-lvm have already made sure that partman itself
does not rely only on the flag, but sets "method" instead (based on the
output of 'pvdisplay') in case of LVM on RAID. In line with this and as
the flag is neve
Your message dated Sat, 08 Jul 2006 08:53:40 +0200
with message-id <[EMAIL PROTECTED]>
and subject line Bug#268433: no lvm on raid?
has caused the attached Bug report to be marked as done.
This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary.
David Härdeman wrote:
> If you assemble the array manually using mdadm you probably can (see the
> man page for mdadm, I think it's something like --assume-clean).
OK, thanks. This is --assume-clean, effectively.
> As a long term solution, support for
> block-device-with-no-partitions-present-or-
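For reference, `--assume-clean` is an option of mdadm's create/build modes, so skipping the initial resync means re-creating the array over the existing members. A sketch with placeholder device names (not taken from the thread); this is only safe if the members really are already in sync:

```shell
# Re-create the RAID-1 without the initial resync. DANGEROUS unless the
# two members are already identical and the metadata layout matches:
mdadm --create /dev/md2 --level=1 --raid-devices=2 \
      --assume-clean /dev/sda6 /dev/sdb6
```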
On Fri, June 2, 2006 12:10, Christophe Chisogne said:
>...
> Configuring md seems to work, but syncing is slow: /dev/md2 is about
> 200 GB.
> (is there a way to avoid the resync, as it was already done previously
> btw?)
If you assemble the array manually using mdadm you probably can (see the
man p
I'm trying to install Etch with d-i beta2 on a box with 2-SATA disks,
with software raid (md) and lvm, but I think I found a bug in d-i :-(
The config I'm trying to achieve (/boot and / not on LVM, to avoid problems):
/dev/md0 (/dev/sd[ab]3) for /boot
/dev/md1 (/dev/sd[ab]6) for /
also sprach Colin Watson <[EMAIL PROTECTED]> [2004.11.03.1156 +0100]:
> The following words may be recognized as reserved words on some
> implementations (when none of the characters are quoted), causing
> unspecified results:
>
> [[ ]] function select
Mh, you win.
> How
On Thu, Oct 28, 2004 at 10:17:40AM +0200, martin f krafft wrote:
> also sprach Colin Watson <[EMAIL PROTECTED]> [2004.10.25.2043 +0200]:
> > You're on debian-boot at the moment. [[ ... ]] is a bashism, not
> > supported by busybox, and therefore useless in d-i.
>
> As far as I can tell, [[/]] is P
also sprach Colin Watson <[EMAIL PROTECTED]> [2004.10.25.2043 +0200]:
> You're on debian-boot at the moment. [[ ... ]] is a bashism, not
> supported by busybox, and therefore useless in d-i.
As far as I can tell, [[/]] is POSIX 1003.2 syntax. Therefore, it's
more like busybox cannot do POSIX.
--
On Mon, Oct 25, 2004 at 07:36:05PM +0200, martin f krafft wrote:
> also sprach Andrew Pollock <[EMAIL PROTECTED]> [2004.10.25.0020 +0200]:
> > - # Loop until the correct amount of active devices has been selected
> > - while [ "${SELECTED}" -ne "${DEV_COUNT}" ]; do
> > + # Loop un
also sprach Andrew Pollock <[EMAIL PROTECTED]> [2004.10.25.0020 +0200]:
> - # Loop until the correct amount of active devices has been selected
> - while [ "${SELECTED}" -ne "${DEV_COUNT}" ]; do
> + # Loop until at least one device has been selected
> + until [ "${SELECTED}"
Your message dated Mon, 25 Oct 2004 11:47:05 -0400
with message-id <[EMAIL PROTECTED]>
and subject line Bug#275714: partman: LVM on RAID unavailable
has caused the attached Bug report to be marked as done.
This means that you claim that the problem has been dealt with.
If this is not the c
This seems to be resolved in the daily build from
http://cdimage.debian.org/pub/cdimage-testing/daily/i386/20041022/sarge-i386-netinst.iso
So I guess it's ok to close this bug now?
--
Fabian Fagerholm <[EMAIL PROTECTED]>
On Sun, Oct 24, 2004 at 02:16:24PM -0400, Joey Hess wrote:
> Andrew Pollock wrote:
> > Patch attached. Will submit it to the BTS shortly. I'm doing some tests with
> > qemu, but it's slooow. Would appreciate some testing by others...
>
> Wouldn't this behave badly if the user selected 0 partitions
Andrew Pollock wrote:
> Patch attached. Will submit it to the BTS shortly. I'm doing some tests with
> qemu, but it's slooow. Would appreciate some testing by others...
Wouldn't this behave badly if the user selected 0 partitions? An array
in degraded mode of zero partitions probably won't work..
On Wed, Oct 20, 2004 at 01:30:38PM +0200, Christian Perrier wrote:
> Quoting Andrew Pollock ([EMAIL PROTECTED]):
>
> > That's a shame. I would have thought it'd be relatively easy to implement,
> > but I haven't given the innards of what's involved close inspection...
>
> Well, the non easiest th
Quoting Andrew Pollock ([EMAIL PROTECTED]):
> That's a shame. I would have thought it'd be relatively easy to implement,
> but I haven't given the innards of what's involved close inspection...
Well, the not-so-easy part is maybe finding someone to volunteer to
implement this... :-)
On Wed, Oct 20, 2004 at 12:55:34PM +0200, Frans Pop wrote:
> On Wednesday 20 October 2004 09:02, Andrew Pollock wrote:
> > Hey, can the installer handle setting up a degraded RAID array (i.e. a
> > RAID-1 mirror with the second disk missing) yet?
On Wednesday 20 October 2004 09:02, Andrew Pollock wrote:
> Hey, can the installer handle setting up a degraded RAID array (i.e. a
> RAID-1 mirror with the second disk missing) yet? mdadm can certainly
> handle the concept.
Not ATM. There are 2 open w
standing of the
> > matter.
>
> Please use a current image; lvm on raid support was added 2 days ago.
Fwor! Gotta give that a go...
Hey, can the installer handle setting up a degraded RAID array (i.e. a
RAID-1 mirror with the second disk missing) yet? mdadm can certainly handle
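Outside the installer, mdadm does support this directly; a sketch with placeholder device names:

```shell
# Create a RAID-1 mirror with the second member deliberately absent,
# using the literal word "missing" in its place:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

# Later, when the second disk exists, attach it and let it rebuild:
mdadm --add /dev/md0 /dev/sdb1
```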
also sprach Joey Hess <[EMAIL PROTECTED]> [2004.10.17.2046 +0200]:
> Please use a current image; lvm on raid support was added 2 days ago.
Okay. I *did* use 20041015 (did I say 20041005?), but I guess I must
have missed it by a day then... :)
--
Please do not CC me when replying to
martin f krafft wrote:
> So yesterday I tried d-i snapshot 20041005. I won't file an
> installation report because it was all smooth as silk, except for
> one problem... which may be related to my misunderstanding of the
> matter.
Please use a current image; lvm on raid support
So yesterday I tried d-i snapshot 20041005. I won't file an
installation report because it was all smooth as silk, except for
one problem... which may be related to my misunderstanding of the
matter.
I set up six RAID 1 partitions and wanted to have LVM for /srv and
/home to on top. However, d-i d
MIME-Version: 1.0
From: "Olaf Mandel" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: root-on-LVM-on-RAID workarounds
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit
Message-Id: <[EMAIL PROTECTED]>
Date: Sun, 15 Aug 2004 14:49:37 +0200 (CEST)
Package: debian-installer
Version: Daily snapshot 20041008
Severity: normal
When creating a RAID array in partman, there is no option to use the
array as storage for LVM. I can create a RAID array and then manually
use pvcreate and vgcreate to activate LVM on it. After that, I can
create logical v
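The manual workaround described above, as a sketch (device and volume group names are placeholders, not taken from the report):

```shell
pvcreate /dev/md0            # turn the RAID array into an LVM physical volume
vgcreate vg0 /dev/md0        # build a volume group on it
lvcreate -L 8G -n root vg0   # logical volumes can then be created as usual
```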
Package: debian-installer
RAID partitions are not selectable as LVM PVs in debian-installer. This
should be possible.
Mike Stone
Processor: AMD Athlon(tm) XP 2200+
Memory: 256MB
Root Device: LVM-on-RAID from 3 SATA hard-drives
(/dev/mapper/vg0-root -> /dev/md/1 -> /dev/discs/disc[012]/part3)
Root Size/partition table:
# sfdisk -d /dev/sda
# partition table of /dev/sda
unit: sectors
/dev/sda1 : start=1, size=
* Tim Day <[EMAIL PROTECTED]> [2004-07-06 01:05]:
> And dropping back to the partitioner, the raid device just shows a 81GB
> partition (no "lvm"), which when selected has "do not use".
The installer doesn't know that it can make lvm on top of RAID. You
should have been offered to create a filesy
Firing up the installer in expert mode, I found it easy enough
to get the partitioner to show me:
---
[!!] Partition disks
...
IDE1 master (hda) 82.3 GB ExcelStor etc etc
#2 primary 81.8 GB raid
#1 primary 509.9 MB swap swap
IDE1 master (hdc) 82.3 GB ExcelStor etc etc
#2 primary 81.8 GB raid
Hi!
On Mon, Jun 21, 2004 at 03:28:29PM -0500, Charles Steinkuehler wrote:
> Martin Michlmayr wrote:
> >>3) Grub install fails when /boot is on a RAID1 device
> >
> >Still there.
>
> Attached are two patches to address this (one for grub-install, and one
> for update-grub). Patches are against t
Martin Michlmayr wrote:
* tbm <[EMAIL PROTECTED]> [2004-05-31 13:37]:
1) partman won't let you build an LVM on top of a RAID device
Still there. This needs changes in libparted - maybe you can look at
this.
I haven't looked at this. If it doesn't involve lots of coding, I may
be able to take a s
* tbm <[EMAIL PROTECTED]> [2004-05-31 13:37]:
> 1) partman won't let you build an LVM on top of a RAID device
Still there. This needs changes in libparted - maybe you can look at
this.
> 2) Kernel install portion of base install fails due to mkinitrd failure
> 2a) LVM tools missing
> 2b) Raidtools
Package: lvmcfg
- Forwarded message from Charles Steinkuehler <[EMAIL PROTECTED]> -
From: Charles Steinkuehler <[EMAIL PROTECTED]>
Subject: Problems/workarounds for install to root on LVM on RAID
Date: Thu, 27 May 2004 11:02:17 -0500
To: [EMAIL PROTECTED]
User-Agent: Mozilla/
OK, I tried this again from 'bare metal', and verified the overall
procedure works, but there are a couple of additional issues and
corrections (inline).
Using the 20040526 netinst image, there seem to be no problems with the
initial ramdisk image created (it starts both the md device and lvm,
Martin Michlmayr wrote:
* Josha Foust <[EMAIL PROTECTED]> [2004-05-27 12:33]:
I didn't think that a new mkinitrd would be in testing for a few more days.
Even when it does, there's still a bug with LVM on root; see 249641:
initrd-tools: does not activate volume groups (LVM2).
A patch was posted May
* Charles Steinkuehler <[EMAIL PROTECTED]> [2004-05-27 12:15]:
> If you're referring to the display in partman, I saw similar behavior,
> which I attributed to a 'wrapping' problem. My 150G raid partition was
> listed as some small number of MB on one line, and the correct size on
> another. I
* Josha Foust <[EMAIL PROTECTED]> [2004-05-27 11:46]:
> A rather serious problem I encountered when running LVM on RAID is that as
> of a few days ago, RAID wasn't automatically activated on startup.
This has been fixed.
> I also had problems when deleting and re-adding lvm
get boot working on raid1.
Josha Foust
-Original Message-
From: Charles Steinkuehler [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 27, 2004 12:16 PM
To: Josha Foust
Cc: [EMAIL PROTECTED]
Subject: Re: Problems/workarounds for install to root on LVM on RAID
Josha Foust wrote:
> A r
Charles Steinkuehler wrote:
Josha Foust wrote:
The raid device also shows it only being 2.0 GB in size when the partition
underneath it is 79GB.
If you're referring to the display in partman, I saw similar behavior,
which I attributed to a 'wrapping' problem. My 150G raid partition was
listed as
A rather serious problem I encountered when running LVM on RAID is that as
of a few days ago, RAID wasn't automatically activated on startup. This
caused LVM to find its partitions inside the RAID partition and mount one of
those. This is obviously a horrible thing to do as it breaks your
I'm trying to install testing onto an x86 box with root on LVM on RAID1,
and I ran into several issues with the latest daily image:
http://cdimage.debian.org/pub/cdimage-testing/sarge_d-i/i386/20040526/sarge-i386-netinst.iso
First, the problems:
1) partman won't let you build
Thanks to the work of others:
http://lists.debian.org/debian-boot/2003/debian-boot-200301/msg00115.html
http://www.midhgard.it/docs/index_en.html
I have a Debian system running with root on LVM on RAID. Kernel
updates, however, require either a manual update of an initrd file, or a
custom
OK, I've looked at everything very quickly (I'm in a hurry ;)
I'll examine all these problems in detail and update the how-to.
Thanks for the feedback (sorry for the pain).
If you think it would be useful, send me info regarding RAID and LVM on
sparc. Maybe we can integrate that into the howto.
Bye
M
On Tue, Jan 21, 2003 at 09:51:58PM +0100, Thorsten Sauter wrote:
> On Tue, Jan 21, 2003 at 02:16:56PM -0600, Nathan E Norman wrote:
> > o Perhaps this was just me, but the default debian install starts both
> > LVM and RAID at S25 in /etc/rcS.d . If you aren't using raid
> > autodetect this can ca
On Tue, Jan 21, 2003 at 02:16:56PM -0600, Nathan E Norman wrote:
> o Perhaps this was just me, but the default debian install starts both
> LVM and RAID at S25 in /etc/rcS.d . If you aren't using raid
> autodetect this can cause problems because LVM starts first. I moved
> LVM to S27.
I have wri
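The reordering described above relies on rcS.d scripts running in lexicographic order; a sketch (the exact symlink names vary between releases and are illustrative here):

```shell
cd /etc/rcS.d
ls S25*              # e.g. S25lvm  S25mdadm-raid
mv S25lvm S27lvm     # LVM now starts after RAID (still at S25)
```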
On Fri, Jan 10, 2003 at 12:02:20PM +0100, Massimiliano Ferrero wrote:
> Hello,
>
> I've written a doc on how to install Debian with the root file system on
> LVM and RAID.
>
> If it's of any interest it can be found at this URL:
> http://www.midhgard.it/docs/index_en.html
>
> Any suggestion/cri
Hello,
I've written a doc on how to install Debian with the root file system on
LVM and RAID.
If it's of any interest it can be found at this URL:
http://www.midhgard.it/docs/index_en.html
Any suggestion/criticism/correction is welcome.
Massimiliano
--
Massimiliano Ferrero
Midhgard s.r.l.
C/