Re: Debian+LVM+RAID

2009-07-12 Thread martin f krafft
also sprach lee  [2009.07.12.0057 +0200]:
> Well, I gave the RAID a name, but that name got lost ... and it still
> has the p designation, with kernel 2.6.30.

If you're asking a question, you should include all relevant
details.

-- 
 .''`.   martin f. krafft   Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
"the question of whether computers can think
 is like the question of whether submarines can swim."
 -- edsgar w. dijkstra


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/)


Re: Debian+LVM+RAID

2009-07-11 Thread Alex Samad
On Sat, Jul 11, 2009 at 04:40:02PM -0600, lee wrote:
> On Fri, Jul 10, 2009 at 07:04:26AM +1000, Alex Samad wrote:
> 
> > comes down to how much you value your data.
> 
> It comes down to how much money you can spend on securing it.

just about the same thing


> 
> > My home server has 10 x 1T drives in it, a mix of raid1 + raid5 +
> > raid6. I have a second server with 9 x 1T drives in it (in the
> > garage) to do my backups - because it would take too long to send
> > off-site and I don't want to spend money on a tape system. I value
> > my data - well, I could afford to throw money at the problem. But I
> > have some important info there, photos & video of my daughter etc
> > ...
> 
> Well, that's like $3000+ you spent on the drives alone, plus about
> another $2000 or so for the controller cards. About $8k in total? Then
> replace at least the disks about every three years. I don't have that
> kind of money (and not that much data). And the more drives you have,
> the more disks can fail.

Umm, 1TB @ $115 each = ~$2K.

Motherboard with 6 SATA ports ~$200 + 2 x Adaptec SATA controllers ~$120 each.


The drives were expensive, yep, but I have that much stuff (well, once
you take out the backup server that's 10 drives, and raid6 costs another 2
drives, so ~8TB worth of space, which really equates to about 6TB of data
to leave some head room).

So roughly ~$3K (Australian dollars).


> 
> > > I'm not afraid of that. But the point remains that having fewer md
> > > devices reduces the chances that something goes wrong with their
> > > discovery. The point remains that reducing the complexity of a setup
> > > makes it easier to handle it (unless you reduced the complexity too
> > > much).
> > 
> > Ok, to take this analogy even further, why have 1T drives? Why not stick
> > with 1G hard drives - less data, less chance of errors.
> 
> Yes --- but you probably have a given amount of data to store. In any
> case, the more complexity your solution to store the data involves,
> the greater the chances are that something goes wrong. That can be
> hardware or software as well as the user making a mistake. The more
> complex the system a user is dealing with, the easier it is to make a
> mistake --- and software or hardware you are not using can't give you
> problems.

yes

> 
> > If you are building a large or !!complex!! system, do a bit of
> > planning beforehand. I set mine up and haven't had a problem with md; I
> > have lost some drives during the life of this server - the hardest thing
> > is matching the drive letter to the physical drive - I didn't attach them
> > in incremental order to the motherboard (silly me).
> 
> Yeah, I know what you mean. The cables should all be labeled and
> things like that ...

It's all about making assumptions; we do it all the time, based on our
previous experiences.

> 
> > > There's nothing on /etc that isn't replaceable. It's nice not to lose
> > > it, but it doesn't really matter. If I lost my data of the last 15
> > > years, I would have a few problems --- not unsolvable ones, I guess,
> > > but it would be utterly inconvenient. Besides that, a lot of that data
> > > is irreplaceable. That's what I call a loss. Considering that, who
> > > cares about /etc?
> > 
> > Really? What about all your certificates in /etc/ssl, or your machine's
> > ssh keys,
> 
> There are certificates and ssh keys? I didn't put any there.

You don't run any https site, nor use ldaps or SSL postgres connections?
I think you will find your system ssh keys are there :)

> 
> > or all that configuration information for your system: mail,
> > ldap, userids, passwords, apache setup, postgres setup.
> 
> It's easy to keep a copy of the configuration file of the mail server
> on the /home partition --- and it's easy to re-create. There are only
> two userids, no ldap, no postgres, and the config for apache is
> totally messed up on Debian anyway since they split up the config file
> so that nobody can get an idea how it's configured.
> 
> Anyway, you can always have backups of /etc; it's not changing very
> frequently like /home.
> 
> > Admittedly you could re-create these from memory, but there are some
> > things that you can't.
> 
> If you have data like that on /etc, you need a backup.

I would say that you are very lucky not to have to back up your /etc.

> 
> > > What I was wondering about is what the advantage is of partitioning
> > > the disks and creating RAIDs from the partitions vs. creating a RAID
> > > from whole disks and partitioning the RAID?
> > 
> > I have to admit I have evaluated partitioning + raid vs. raid +
> > partitioning, and I think I would go with the former; more systems (old
> > linux boxes, windows boxes, mac boxes) understand partitions, whereas
> > not all OSes understand raid + partitioning. And currently I don't see the
> > advantage to raid + partitioning.
> 
> Hm, is it possible to read/use a partition/file system that is part of
> a software-RAID without the RAID-software? In that case, I could see
> how it can be 

Re: Debian+LVM+RAID

2009-07-11 Thread lee
On Thu, Jul 09, 2009 at 10:44:30PM +0200, martin f krafft wrote:
> > You can still decide whether you want a partitionable or non-partitionable
> > RAID, so it's not the case that all RAIDs are partitionable since kernel
> > 2.6.29. Unfortunately, the man page doesn't seem to say what the
> > default is for the partitionability of the RAID.
> 
> mdadm has, uh, conservative maintenance. mdp is no longer needed.
> "non-partitionable" arrays will be partitionable with newer kernels.

Well, I gave the RAID a name, but that name got lost ... and it still
has the p designation, with kernel 2.6.30.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-11 Thread lee
On Fri, Jul 10, 2009 at 07:04:26AM +1000, Alex Samad wrote:

> comes down to how much you value your data.

It comes down to how much money you can spend on securing it.

> My home server has 10 x 1T drives in it, a mix of raid1 + raid5 +
> raid6. I have a second server with 9 x 1T drives in it (in the
> garage) to do my backups - because it would take too long to send
> off-site and I don't want to spend money on a tape system. I value
> my data - well, I could afford to throw money at the problem. But I
> have some important info there, photos & video of my daughter etc
> ...

Well, that's like $3000+ you spent on the drives alone, plus about
another $2000 or so for the controller cards. About $8k in total? Then
replace at least the disks about every three years. I don't have that
kind of money (and not that much data). And the more drives you have,
the more disks can fail.

> > I'm not afraid of that. But the point remains that having fewer md
> > devices reduces the chances that something goes wrong with their
> > discovery. The point remains that reducing the complexity of a setup
> > makes it easier to handle it (unless you reduced the complexity too
> > much).
> 
> Ok, to take this analogy even further, why have 1T drives? Why not stick
> with 1G hard drives - less data, less chance of errors.

Yes --- but you probably have a given amount of data to store. In any
case, the more complexity your solution to store the data involves,
the greater the chances are that something goes wrong. That can be
hardware or software as well as the user making a mistake. The more
complex the system a user is dealing with, the easier it is to make a
mistake --- and software or hardware you are not using can't give you
problems.

> If you are building a large or !!complex!! system, do a bit of
> planning beforehand. I set mine up and haven't had a problem with md; I
> have lost some drives during the life of this server - the hardest thing
> is matching the drive letter to the physical drive - I didn't attach them
> in incremental order to the motherboard (silly me).

Yeah, I know what you mean. The cables should all be labeled and
things like that ...

> > There's nothing on /etc that isn't replaceable. It's nice not to lose
> > it, but it doesn't really matter. If I lost my data of the last 15
> > years, I would have a few problems --- not unsolvable ones, I guess,
> > but it would be utterly inconvenient. Besides that, a lot of that data
> > is irreplaceable. That's what I call a loss. Considering that, who
> > cares about /etc?
> 
> Really? What about all your certificates in /etc/ssl, or your machine's
> ssh keys,

There are certificates and ssh keys? I didn't put any there.

> or all that configuration information for your system: mail,
> ldap, userids, passwords, apache setup, postgres setup.

It's easy to keep a copy of the configuration file of the mail server
on the /home partition --- and it's easy to re-create. There are only
two userids, no ldap, no postgres, and the config for apache is
totally messed up on Debian anyway since they split up the config file
so that nobody can get an idea how it's configured.

Anyway, you can always have backups of /etc; it's not changing very
frequently like /home.

> Admittedly you could re-create these from memory, but there are some
> things that you can't.

If you have data like that on /etc, you need a backup.

> > What I was wondering about is what the advantage is of partitioning
> > the disks and creating RAIDs from the partitions vs. creating a RAID
> > from whole disks and partitioning the RAID?
> 
> I have to admit I have evaluated partitioning + raid vs. raid +
> partitioning, and I think I would go with the former; more systems (old
> linux boxes, windows boxes, mac boxes) understand partitions, whereas
> not all OSes understand raid + partitioning. And currently I don't see the
> advantage to raid + partitioning.

Hm, is it possible to read/use a partition/file system that is part of
a software-RAID without the RAID-software? In that case, I could see
how it can be an advantage to use partitions+RAID rather than
RAID+partitions. But even then, can the "other systems" you're listing
handle ext4fs? I still don't see the advantage of partitioning+RAID.

> I believe the complexity is not that high and the returns are worth it,
> I haven't lost any information that I have had protected in a long time.

Maybe that's because we've had different experiences ... To give an
example: I've had disks disconnecting every now and then that were
part of a RAID. The two disks were partitioned, RAID-1s created from
the partitions. Every time a disk would lose contact, I had to
manually re-add all the partitions after I turned the computer off and
back on and the disk came back.

Since there were three partitions and three md devices involved, I
could have made a mistake each time I re-added the partitions to the
RAID by specifying the wrong partition or md device.
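
For reference, the manual re-add in a setup like that looks roughly like
this - just a sketch, the device names (md0/md1/md2, sdb1/sdb2/sdb3) are
assumed for illustration:

   # see which arrays are degraded
   cat /proc/mdstat
   # re-add each partition of the returned disk to its array
   mdadm /dev/md0 --re-add /dev/sdb1
   mdadm /dev/md1 --re-add /dev/sdb2
   mdadm /dev/md2 --re-add /dev/sdb3

Mixing up one of those pairings is exactly the kind of mistake I mean.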

Now having only one md dev

Re: Debian+LVM+RAID

2009-07-10 Thread Pete Boyd
This is how I do it on Debian Etch:
RAID:
http://thegoldenear.org/toolbox/unices/server-setup-debian-etch.html#raid

LVM:
http://thegoldenear.org/toolbox/unices/server-setup-debian-etch.html#lvm

The same process works fine for Debian Lenny (I just haven't gotten my
Lenny guide written yet), except that the Debian Installer seems to be
broken with regard to setting up RAID:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=511452
But this bug is easily worked around by a reboot in the middle of
the installer.

Pete Boyd




-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-10 Thread thveillon.debian
Roger Leigh wrote:
> On Thu, Jul 09, 2009 at 10:45:08PM +0200, martin f krafft wrote:
>> also sprach thveillon.debian  
>> [2009.07.09.2215 +0200]:
>>> It is possible to boot from mdadm software raid1 with grub2, in Lenny
>>> and Squeeze. But I would worry about the lvm, I don't think this is as
>>> straightforward, maybe not even possible at this point (to be
>>> double-checked anyway).
>> grub2 can boot LVM just as well as it can boot RAID1 or RAID5.
> 
> Is this stable for production use, or still in the experimental
> stage?
> 
> 
Hi,
From my little experience: I have been using it since before the Lenny
release (on Lenny testing), and I am now on Squeeze. I have been using
it on Ubuntu Intrepid and Jaunty too, and only had a few problems on Hardy
(had to backport a newer version). All machines are on some kind of raid,
some on ext4, but no lvm. Most machines are workstations rebooted daily,
and have no separate /boot; everything is on raid. So yes, it's quite
stable for me.
I found that even when there is a problem, it is easier to recover quickly
without even leaving the grub2 shell-like environment, and I like the
modularity of the /etc/grub.d/ templates.
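
To give an idea of that modularity - a rough sketch, the kernel version,
root device and partition below are made up for the example. Any
executable dropped into /etc/grub.d/ that prints menu entries gets picked
up the next time grub.cfg is regenerated, e.g. a file
/etc/grub.d/40_rescue containing:

#!/bin/sh
# print one extra menu entry (values below are only placeholders)
cat << 'ENTRY'
menuentry "Rescue kernel (example)" {
    set root=(hd0,1)
    linux /vmlinuz-2.6.30 root=/dev/md0 ro single
    initrd /initrd.img-2.6.30
}
ENTRY

then "chmod +x /etc/grub.d/40_rescue" and run update-grub.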

I recently set grub2 up on a Fedora 11 machine; it's really not well
integrated into the system yet and requires some manual work, but after
that it just works (on ext4). Debian has done a great job integrating it.

The only downsides are:
_The lack of recovery live-CDs that support grub2 out of the box (but a
live Ubuntu/Debian does the job).
_Some disk imaging tools (Clonezilla) default to (re)installing grub on
the imaged disk; you have to be careful and disable it.
_The "os-prober" helper package works somewhat randomly for me, but it
is only supposed to auto-detect other installed systems, so no big deal.
_I don't know how grub2 behaves outside of x86 machines, or with non-DOS
disk labels.
_There is currently no support for partition labels in grub.cfg; I miss
that, but UUIDs are arguably more reliable anyway.

That's all from my little experience with grub2; I wouldn't go back at this
point, and I can't complain about stability, especially on Debian.


Give it a try,

Tom


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-09 Thread martin f krafft
also sprach Roger Leigh  [2009.07.10.0131 +0200]:
> > grub2 can boot LVM just as well as it can boot RAID1 or RAID5.
> 
> Is this stable for production use, or still in the experimental
> stage?

It's non-default in lenny still, but it works. That's all I can tell
you, sorry.

-- 
 .''`.   martin f. krafft   Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
"geld ist das brecheisen der macht."
 - friedrich nietzsche


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/)


Re: Debian+LVM+RAID

2009-07-09 Thread Roger Leigh
On Thu, Jul 09, 2009 at 10:45:08PM +0200, martin f krafft wrote:
> also sprach thveillon.debian  
> [2009.07.09.2215 +0200]:
> > It is possible to boot from mdadm software raid1 with grub2, in Lenny
> > and Squeeze. But I would worry about the lvm, I don't think this is as
> > straightforward, maybe not even possible at this point (to be
> > double-checked anyway).
> 
> grub2 can boot LVM just as well as it can boot RAID1 or RAID5.

Is this stable for production use, or still in the experimental
stage?


-- 
  .''`.  Roger Leigh
 : :' :  Debian GNU/Linux http://people.debian.org/~rleigh/
 `. `'   Printing on GNU/Linux?   http://gutenprint.sourceforge.net/
   `-GPG Public Key: 0x25BFB848   Please GPG sign your mail.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-09 Thread Alex Samad

[snip]

> > > > what happens if there is something important swapped out and that drive
> > > > dies ?  I could understand not wanting to put it on lvm and then raid1.
> > > 
> > > When the swap partition quits working, the system might stop
> > > working. So the question is what's more important, increased
> > > reliability or faster swap.
> > 
> > I am not sure that raid1 is noticeably slower than raid0 (or jbod), raid

I.e., for example, the old rule of 2 * physical RAM size = swap size. I have
a machine with 256G of RAM; I don't need 512G of swap space.

> > 5 maybe or any other parity raid

[snip]

> > 
> > Depends on how much you are going to spend on the controller and whether
> > or not you are going to have battery-backed cache - if not, you
> > might as well go software raid (only talking raid1 here).
> > 
> > If you do spend the money and have multiple machines then you might as
> > well go for a SAN.
> 
> Maybe --- I need to learn more about SAN. I'll have to read up on it
> and find out what it can do.

The downside is being stuck with dedicated hardware and firmware. Look at
the large data centers being built: they are all moving towards white-box
1RU boxes - generic hardware, keeping it simple. Can you take an array
from your smartraid (HP raid controller) and attach it to a Dell PERC
controller? If you use software raid then yes, you can.
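
In practice moving a software raid between boxes is just something like
(device names assumed for the example):

mdadm --examine /dev/sdc1 /dev/sdd1   # inspect the md superblocks
mdadm --assemble --scan               # assemble whatever arrays are found

no vendor controller required.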

> 

[snip]

> > > involved and then creating RAIDs from the partitions? It's more work
> > > when setting it up, and it can turn into a disaster when something
> > > with discovering the RAID goes wrong.
> > 
> > sfdisk -d  > raidlayout.out
Document it and have a change-management doc/protocol. At some point
you have to trust something!

> > 

[snip]

> 
> I told it to start the RAID *when booting*, not any time before ---
> and I didn't boot. I wanted to install the raid tools and then make
> sure that the configuration was ok, and only after that I would have
> started the md devices. But they were started immediately before I
> could do anything, they didn't wait for a reboot.
> 
> What would you expect when you're being asked "Should X be done when
> booting?"? When I say "Yes", I expect that X will be done when
> booting, not that it will be done immediately.

My apologies, I misread.

> 

[snip]

> > And this is why we have backups.
> 
> I didn't have a backup. I used to have tape drives, but with the
> amount of data to backup steadily increasing with the disk sizes, you
> get to the point where that gets too expensive and where there isn't
> any affordable and good solution. You can't buy tape drives and tapes
> that fast ... I still don't have a backup solution. I'm making backups
> on disks now, but that isn't a good solution, only a little better
> than no backup.

Comes down to how much you value your data. My home server has 10 x 1T
drives in it, a mix of raid1 + raid5 + raid6. I have a second server with
9 x 1T drives in it (in the garage) to do my backups - because it would
take too long to send off-site and I don't want to spend money on a tape
system. I value my data - well, I could afford to throw money at the
problem. But I have some important info there, photos & video of my
daughter etc ...

> 

[snip]

> 
> I'm not afraid of that. But the point remains that having fewer md
> devices reduces the chances that something goes wrong with their
> discovery. The point remains that reducing the complexity of a setup
> makes it easier to handle it (unless you reduced the complexity too
> much).

Ok, to take this analogy even further, why have 1T drives? Why not stick
with 1G hard drives - less data, less chance of errors.

If you are building a large or !!complex!! system, do a bit of
planning beforehand. I set mine up and haven't had a problem with md; I
have lost some drives during the life of this server - the hardest thing
is matching the drive letter to the physical drive - I didn't attach them
in incremental order to the motherboard (silly me).

> 
> > > To me, it seems easier to only have one md device and to partition
> > > that, if needed, than doing it the other way round. However, I went
> > > the easiest way in that I have another disk with everything on it but
> > > /home. If that disk fails, nothing is lost, and if there are problems,
> > 
> > well except for /etc/
> 
> There's nothing on /etc that isn't replaceable. It's nice not to lose
> it, but it doesn't really matter. If I lost my data of the last 15
> years, I would have a few problems --- not unsolvable ones, I guess,
> but it would be utterly inconvenient. Besides that, a lot of that data
> is irreplaceable. That's what I call a loss. Considering that, who
> cares about /etc?

Really? What about all your certificates in /etc/ssl, or your machine's
ssh keys, or all that configuration information for your system: mail,
ldap, userids, passwords, apache setup, postgres setup.

Admittedly you could re-create these from memory, but there are some
things that you can't.

> 
> > > a single disk is the simplest to deal with. --

Re: Debian+LVM+RAID

2009-07-09 Thread martin f krafft
also sprach thveillon.debian  [2009.07.09.2215 
+0200]:
> It is possible to boot from mdadm software raid1 with grub2, in Lenny
> and Squeeze. But I would worry about the lvm, I don't think this is as
> straightforward, maybe not even possible at this point (to be
> double-checked anyway).

grub2 can boot LVM just as well as it can boot RAID1 or RAID5.

-- 
 .''`.   martin f. krafft   Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
"es ist immer etwas wahnsinn in der liebe.
 es ist aber auch immer etwas vernunft im wahnsinn."
 - friedrich nietzsche


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/)


Re: Debian+LVM+RAID

2009-07-09 Thread martin f krafft
also sprach lee  [2009.07.09.2204 +0200]:
>    -a, --auto{=no,yes,md,mdp,part,p}{NN}
>           Instruct mdadm to create the device file if needed,
>           possibly allocating an unused minor number.  "md" causes
>           a non-partitionable array to be used.  "mdp", "part" or
>           "p" causes a partitionable array (2.6 and later) to be
>           used.
> "
> 
> You can still decide whether you want a partitionable or non-partitionable
> RAID, so it's not the case that all RAIDs are partitionable since kernel
> 2.6.29. Unfortunately, the man page doesn't seem to say what the
> default is for the partitionability of the RAID.

mdadm has, uh, conservative maintenance. mdp is no longer needed.
"non-partitionable" arrays will be partitionable with newer kernels.

-- 
 .''`.   martin f. krafft   Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
"truth is stranger than fiction, but it is because
 fiction is obliged to stick to possibilities; truth isnt."
   -- mark twain


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/)


Re: Debian+LVM+RAID

2009-07-09 Thread thveillon.debian
lee wrote:
> On Thu, Jul 09, 2009 at 07:47:36PM +0100, Roger Leigh wrote:
> 
>>In the partitioner, set /dev/sda1 as /boot.  /boot needs to be
>>separate from the RAID+LVM setup in order to be accessible by the
>>bootloader, though it's possible grub2 will fix this at some point.
>>Keeping it separate is safe and recommended.
> 
> Are you saying it's impossible to install on (boot from) a software
> RAID?
> 
> 

It is possible to boot from mdadm software raid1 with grub2, in Lenny
and Squeeze. But I would worry about the lvm, I don't think this is as
straightforward, maybe not even possible at this point (to be
double-checked anyway).

Tom


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-09 Thread lee
On Thu, Jul 09, 2009 at 07:47:36PM +0100, Roger Leigh wrote:

>In the partitioner, set /dev/sda1 as /boot.  /boot needs to be
>separate from the RAID+LVM setup in order to be accessible by the
>bootloader, though it's possible grub2 will fix this at some point.
>Keeping it separate is safe and recommended.

Are you saying it's impossible to install on (boot from) a software
RAID?


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-09 Thread lee
On Thu, Jul 09, 2009 at 11:42:16AM +0200, martin f krafft wrote:
> also sprach lee  [2009.07.09.0707 +0200]:
> > Why do you need LVM?
> 
> LVM offers features that RAID does not. If you want those features,
> you need LVM.

Yeah, but that wasn't what I was asking. I tried to find out what
features he needed and what he's trying to do.

> > The RAID array must be partitionable, which is an option you
> > eventually need to specify when creating it. I don't know what the
> [...]
> > To clarify: There are partitionable RAID arrays and
> > non-partitionable RAID arrays. When creating a RAID array, you
> > need to specify which kind --- partitionable or non-partitionable
> > --- you want to create.
> 
> Since 2.6.29, all RAIDs are partitionable. Not in lenny though.

See man mdadm:


"
   -a, --auto{=no,yes,md,mdp,part,p}{NN}
  Instruct mdadm to create the device file if needed,
  possibly allocating an unused minor number.  "md" causes
  a non-partitionable array to be used.  "mdp", "part" or
  "p" causes a partitionable array (2.6 and later) to be
  used.
"

You can still decide whether you want a partitionable or non-partitionable
RAID, so it's not the case that all RAIDs are partitionable since kernel
2.6.29. Unfortunately, the man page doesn't seem to say what the
default is for the partitionability of the RAID.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-09 Thread lee
On Thu, Jul 09, 2009 at 08:43:13PM +1000, Alex Samad wrote:
> On Thu, Jul 09, 2009 at 02:19:44AM -0600, lee wrote:
> > On Thu, Jul 09, 2009 at 03:33:01PM +1000, Alex Samad wrote:
> > 
> > > > Creating a swap partition on a software RAID device isn't ideal.
> > > 
> > > what happens if there is something important swapped out and that drive
> > > dies ?  I could understand not wanting to put it on lvm and then raid1.
> > 
> > When the swap partition quits working, the system might stop
> > working. So the question is what's more important, increased
> > reliability or faster swap.
> 
> I am not sure that raid1 is noticeably slower than raid0 (or jbod); raid5
> maybe, or any other parity raid

Maybe you're right ... When you have all partitions on a RAID to
improve reliability, it doesn't make sense to make an exception for
swap partitions. And RAM isn't as much an issue as it used to be
because the prices have come down so much that it is affordable to
have so much RAM that swapping rarely occurs.

> > Personally, I'd feel awkward about having swap partitions on
> > a software RAID, but when setting up something like a server to
> > provide important services for a company, I would insist on using a
> > good hardware RAID controller and likely put the swap partition onto
> > the RAID.
> 
> Depends on how much you are going to spend on the controller and whether
> or not you are going to have battery-backed cache - if not, you
> might as well go software raid (only talking raid1 here).
> 
> If you do spend the money and have multiple machines then you might as
> well go for a SAN.

Maybe --- I need to learn more about SAN. I'll have to read up on it
and find out what it can do.

> I would suggest that for most commercial situations a software raid setup
> of 2 raid1 disks is a far better solution than a proprietary hardware raid
> controller.

Hm, interesting. What makes you think so? Getting a good balance
between reliability and cost?

> > > I have had to rescue machines, and having a simple boot + / makes life
> > > a lot simpler - why make life hard?
> > 
> > Aren't you making it hard by having to partition all the disks
> > involved and then creating RAIDs from the partitions? It's more work
> > when setting it up, and it can turn into a disaster when something
> > with discovering the RAID goes wrong.
> 
> sfdisk -d  > raidlayout.out
> 
> sfdisk  < raidlayout.out
> 
> you could wrap it inside a for loop if you want 

I wouldn't do that. I don't have that much trust in software and
hardware.

> > For example, when I put the disks containing my RAID-1 into another
> > computer and installed the raid tools, I was asked questions about
> > starting the raid and answered that I wanted to start the RAID arrays
> > when booting. I had the disks partitioned and had created RAID arrays
> > from the partitions.
> > 
> > The result was that the RAID was started immediately (which I consider
> > as a bug) instead when booting, before I had any chance to check and
> 
> But you said above you gave the okay to start all raid devices, so why
> complain when it does?

I told it to start the RAID *when booting*, not any time before ---
and I didn't boot. I wanted to install the raid tools and then make
sure that the configuration was ok, and only after that I would have
started the md devices. But they were started immediately before I
could do anything, they didn't wait for a reboot.

What would you expect when you're being asked "Should X be done when
booting?"? When I say "Yes", I expect that X will be done when
booting, not that it will be done immediately.

You could say that in that case, I trusted the software too
much. Never do that ...

> > to configure the RAID arrays correctly so that they would be detected
> > as they should. It started resyncing the md devices in a weird way. I
> > was only lucky that it didn't go wrong. If it had gone wrong, I could
> > have lost all my data.
> 
> And this why we have backups

I didn't have a backup. I used to have tape drives, but with the
amount of data to backup steadily increasing with the disk sizes, you
get to the point where that gets too expensive and where there isn't
any affordable and good solution. You can't buy tape drives and tapes
that fast ... I still don't have a backup solution. I'm making backups
on disks now, but that isn't a good solution, only a little better
than no backup.

> > Now when I got new disks, I created the RAID arrays from the whole
> > disks. In this case, I didn't partition the RAID array, but even if I
> > did, the number of md devices was reduced from the three I had before
> > to only one. The lower the number of md devices you have, the less
> > likely it seems that something can go wrong with discovering them,
> > simply because there aren't so many.
> 
> I don't think you are going to have overflow problems with the number of
> raid devices.

I'm not afraid of that. But the point remains that having fewer md
devices reduces the 

Re: Debian+LVM+RAID

2009-07-09 Thread Roger Leigh
On Thu, Jul 09, 2009 at 10:34:13AM +0700, Vilasith Phonepadith wrote:
> 
> I am trying with LVM+RAID, and I did some tests. I need your help for the 
> installation of Lenny.
> 
> Problem: I have to install Debian with the requirements as follows -
> partitioning: use the LVM system for partitioning the servers, if
> possible on Disk RAID1 Mirroring.
> I don't understand this well and don't know how to start. Which one should
> be done first, which after, or should they be done at the same time?
> If we install by default, it's alright. But now, with LVM+RAID, it's 
> something different for me.

Make the LVM run on top of RAID.  This works perfectly, and I use it
on several systems.  It all works through the debian installer.

1) Partition disks

   Give both disks the same partition layout:
 p1: (/boot) 200MiB is plenty for several kernels and initrds.
 But you could make it 500MiB to be extra safe.
 p2: (RAID) use the rest of the available space

2) Set up software RAID

   Assuming your discs are /dev/sda and /dev/sdb, sda2 and sdb2 will
   be your RAID set.  Just choose them when setting up RAID.

3) Configure LVM

   Start by creating a new physical volume (PV).  Choose your raid
   device (/dev/md0) created in the previous step as the PV.

   Next, create a volume group (VG).  I normally give it the same
   name as the machine hostname.

   Then, create the logical volumes (LVs) for / (root), /usr, /var,
   /home, /srv and any other volumes you need.  Also create any
   swap partitions you like.  I normally create a series of 2GiB
   LVs swap0, swap1, ... swapn but I think the 2GiB size limit is gone
   now.

4) Set up the filesystems

   In the partitioner, set /dev/sda1 as /boot.  /boot needs to be
   separate from the RAID+LVM setup in order to be accessible by the
   bootloader, though it's possible grub2 will fix this at some point.
   Keeping it separate is safe and recommended.

   Configure all of your LVs by choosing the filesystem type and mount
   point.  For Lenny I'd go with ext2 for /boot and ext3 for everything
   else.

5) Continue

   It's all done now, just carry on with the installation as usual,
   and it should all Just Work.
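
If you ever need to reproduce the same layout by hand outside the
installer, the equivalent commands look roughly like this (a sketch only;
the volume group name "myhost" and the sizes are placeholders):

   # RAID1 across the second partition of each disk
   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

   # LVM on top of the array
   pvcreate /dev/md0
   vgcreate myhost /dev/md0
   lvcreate -L 10G -n root myhost
   lvcreate -L 2G -n swap0 myhost

   # filesystems
   mkfs.ext2 /dev/sda1             # /boot, outside the RAID+LVM
   mkfs.ext3 /dev/myhost/root
   mkswap /dev/myhost/swap0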


Regards,
Roger
 
-- 
  .''`.  Roger Leigh
 : :' :  Debian GNU/Linux http://people.debian.org/~rleigh/
 `. `'   Printing on GNU/Linux?   http://gutenprint.sourceforge.net/
   `-GPG Public Key: 0x25BFB848   Please GPG sign your mail.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-09 Thread Alex Samad
On Thu, Jul 09, 2009 at 02:19:44AM -0600, lee wrote:
> On Thu, Jul 09, 2009 at 03:33:01PM +1000, Alex Samad wrote:
> 
> > > Creating a swap partition on a software RAID device isn't ideal.
> > 
> > what happens if there is something important swapped out and that drive
> > dies ?  I could understand not wanting to put it on lvm and then raid1.
> 
> When the swap partition quits working, the system might stop
> working. So the question is what's more important, increased
> reliability or faster swap.

I am not sure that raid1 is noticeably slower than raid0 (or jbod); raid5
maybe, or any other parity raid.


> 
> Personally, I'd feel awkward about having swap partitions on
> a software RAID, but when setting up something like a server to
> provide important services for a company, I would insist on using a
> good hardware RAID controller and likely put the swap partition onto
> the RAID.

Depends on how much you are going to spend on the controller and whether
or not you are going to have battery-backed cache - if not, you
might as well go software raid (only talking raid1 here).

If you do spend the money and have multiple machines then you might as
well go for a SAN.

I would suggest that for most commercial situations a software raid setup
of 2 raid1 disks is a far better solution than a proprietary hardware raid
controller.

> 
> > > BTW, wouldn't you rather create a partitionable RAID from the whole
> > > disks and then partition that? If not, why not? (Letting aside where
> > > to put the swap partition ...)
> > 
> > I have had to rescue machines, and having a simple boot + / makes life
> > a lot simpler - why make life hard?
> 
> Aren't you making it hard by having to partition all the disks
> involved and then creating RAIDs from the partitions? It's more work
> when setting it up, and it can turn into a disaster when something
> with discovering the RAID goes wrong.



sfdisk -d  > raidlayout.out

sfdisk  < raidlayout.out

you could wrap it inside a for loop if you want 
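
e.g. something like (disk names assumed; sda holds the layout you want to
clone):

for d in sdb sdc sdd; do
    sfdisk /dev/$d < raidlayout.out
done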

> 
> For example, when I put the disks containing my RAID-1 into another
> computer and installed the raid tools, I was asked questions about
> starting the raid and answered that I wanted to start the RAID arrays
> when booting. I had the disks partitioned and had created RAID arrays
> from the partitions.
> 
> The result was that the RAID was started immediately (which I consider
> as a bug) instead when booting, before I had any chance to check and

But you said above you gave the okay to start all raid devices, so why
complain when it does?

> to configure the RAID arrays correctly so that they would be detected
> as they should. It started resyncing the md devices in a weird way. I
> was only lucky that it didn't go wrong. If it had gone wrong, I could
> have lost all my data.

And this is why we have backups.

> 
> Now when I got new disks, I created the RAID arrays from the whole
> disks. In this case, I didn't partition the RAID array, but even if I
> did, the number of md devices was reduced from the three I had before
> to only one. The lower the number of md devices you have, the less
> likely it seems that something can go wrong with discovering them,
> simply because there aren't so many.

I don't think you are going to have overflow problems with the number of
raid devices.

> 
> To me, it seems easier to only have one md device and to partition
> that, if needed, than doing it the other way round. However, I went
> the easiest way in that I have another disk with everything on it but
> /home. If that disk fails, nothing is lost, and if there are problems,

well except for /etc/

> a single disk is the simplest to deal with. --- I might have done it
> otherwise, but it has been impossible to install on SATA disks because
> the modules required to access SATA disks are not available to the

If you have a look at the latest installer, I think you will find it has
all the necessary modules now.

> installer. Maybe that has been fixed by now; if it hasn't, it really
> should be fixed.

I was suggesting putting / on its own partition as well as /boot. /boot I
do out of habit from a long time ago; with busybox you can access the
system even if the other partition is corrupted and still try to
salvage stuff.


The suggestions I have made are for reducing risk; the gains made by having
a separate root and boot are, in my mind, worth it.

In a production environment you have change management procedures or
at least some documentation.

> 
> 
> In which way having many md devices made it easier for you to perform
> rescue operations? Maybe there are advantages I'm not thinking of but
> which would be good to know. I want to get rid of that IDE disk and
> might have a chance to, so I'm going to have to decide if I want to
> install on a RAID. If it's better to partition the disks rather than
> the RAID, I should do it that way.

You have missed the point - the advantage is having a separate / and a
separate /boot to protect them; you can even mo

Re: Debian+LVM+RAID

2009-07-09 Thread martin f krafft
also sprach lee  [2009.07.09.0707 +0200]:
> Why do you need LVM?

LVM offers features that RAID does not. If you want those features,
you need LVM.

> The RAID array must be partitionable, which is an option you
> eventually need to specify when creating it. I don't know what the
[...]
> To clarify: There are partitionable RAID arrays and
> non-partitionable RAID arrays. When creating a RAID array, you
> need to specify which kind --- partitionable or non-partitionable
> --- you want to create.

Since 2.6.29, all RAIDs are partitionable. Not in lenny though.

-- 
 .''`.   martin f. krafft   Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
"driving with a destination
 is like having sex to have children"
 -- backwater wayne miller


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/)


Re: Debian+LVM+RAID

2009-07-09 Thread lee
On Thu, Jul 09, 2009 at 03:33:01PM +1000, Alex Samad wrote:

> > Creating a swap partition on a software RAID device isn't ideal.
> 
> what happens if there is something important swapped out and that drive
> dies ?  I could understand not wanting to put it on lvm and then raid1.

When the swap partition quits working, the system might stop
working. So the question is what's more important, increased
reliability or faster swap.

Personally, I'd feel awkward about having swap partitions on
a software RAID, but when setting up something like a server to
provide important services for a company, I would insist on using a
good hardware RAID controller and likely put the swap partition onto
the RAID.

> > BTW, wouldn't you rather create a partitionable RAID from the whole
> > disks and then partition that? If not, why not? (Letting aside where
> > to put the swap partition ...)
> 
> I have had to rescue machines, and having a simple boot + / makes life
> a lot simpler - why make life hard?

Aren't you making it hard by having to partition all the disks
involved and then creating RAIDs from the partitions? It's more work
when setting it up, and it can turn into a disaster when something
with discovering the RAID goes wrong.

For example, when I put the disks containing my RAID-1 into another
computer and installed the raid tools, I was asked questions about
starting the raid and answered that I wanted to start the RAID arrays
when booting. I had the disks partitioned and had created RAID arrays
from the partitions.

The result was that the RAID was started immediately (which I consider
as a bug) instead when booting, before I had any chance to check and
to configure the RAID arrays correctly so that they would be detected
as they should. It started resyncing the md devices in a weird way. I
was only lucky that it didn't go wrong. If it had gone wrong, I could
have lost all my data.

Now when I got new disks, I created the RAID arrays from the whole
disks. In this case, I didn't partition the RAID array, but even if I
did, the number of md devices was reduced from the three I had before
to only one. The lower the number of md devices you have, the less
likely it seems that something can go wrong with discovering them,
simply because there aren't so many.

To me, it seems easier to only have one md device and to partition
that, if needed, than doing it the other way round. However, I went
the easiest way in that I have another disk with everything on it but
/home. If that disk fails, nothing is lost, and if there are problems,
a single disk is the simplest to deal with. --- I might have done it
otherwise, but it has been impossible to install on SATA disks because
the modules required to access SATA disks are not available to the
installer. Maybe that has been fixed by now; if it hasn't, it really
should be fixed.


In which way having many md devices made it easier for you to perform
rescue operations? Maybe there are advantages I'm not thinking of but
which would be good to know. I want to get rid of that IDE disk and
might have a chance to, so I'm going to have to decide if I want to
install on a RAID. If it's better to partition the disks rather than
the RAID, I should do it that way.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-09 Thread linuksos
Hi,

here are some LVM basics to get you started:

http://www.linuxconfig.org/Linux_lvm_-_Logical_Volume_Manager

lubos

On Thu, Jul 9, 2009 at 5:31 PM, Serge van
Ginderachter wrote:
> 2009/7/9 Alex Samad :
>
>>> Creating a swap partition on a software RAID device isn't ideal. It is
>>> better to create a swap partition on each of the physical devices and
>>> give them the same priority (in /etc/fstab). That's only one example,
>>> you could also use a disk that isn't part of the RAID and have only
>>> one swap partition ...
>>
>> what happens if there is something important swapped out and that drive
>> dies ?  I could understand not wanting to put it on lvm and then raid1.
>
> I agree with that. One generally uses RAID to keep a host running when
> a disk failure occurs. If swap is not mirrored, processes will crash,
> or worst case the box might crash.
>
> If performance is an issue, then you'd better go hardware raid.
>
> Unless someone knows of another strong reason not to put swap on raid?
>
> --
>
>
>     Met vriendelijke groet,
>
>     Serge van Ginderachter
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
>
>



-- 
lubo
http://www.linuxconfig.org/


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-09 Thread Serge van Ginderachter
2009/7/9 Alex Samad :

>> Creating a swap partition on a software RAID device isn't ideal. It is
>> better to create a swap partition on each of the physical devices and
>> give them the same priority (in /etc/fstab). That's only one example,
>> you could also use a disk that isn't part of the RAID and have only
>> one swap partition ...
>
> what happens if there is something important swapped out and that drive
> dies ?  I could understand not wanting to put it on lvm and then raid1.

I agree with that. One generally uses RAID to keep a host running when
a disk failure occurs. If swap is not mirrored, processes will crash,
or worst case the box might crash.

If performance is an issue, then you'd better go hardware raid.

Unless someone knows of another strong reason not to put swap on raid?

-- 


 Met vriendelijke groet,

 Serge van Ginderachter


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-08 Thread Alex Samad
On Wed, Jul 08, 2009 at 11:16:04PM -0600, lee wrote:
> On Thu, Jul 09, 2009 at 02:13:25PM +1000, Alex Samad wrote:
> 
> > md0 = sda1 + sdb1
> > md1 = sda2 + sdb2
> > md2 = sda3 + sdb3
> > 
> > md0 = /boot (ext2)
> > md1 = / (ext3)
> > md2 = lvm physical device 
> > 
> > Then on LVM
> > 
> > size of memory = swap partition
> > 1G = ext3  mount point /var/log
> > 
> > Then chop up the rest lvm for /home or ???
> 
> Creating a swap partition on a software RAID device isn't ideal. It is
> better to create a swap partition on each of the physical devices and
> give them the same priority (in /etc/fstab). That's only one example,
> you could also use a disk that isn't part of the RAID and have only
> one swap partition ...

what happens if there is something important swapped out and that drive
dies ?  I could understand not wanting to put it on lvm and then raid1.


> 
> BTW, wouldn't you rather create a partitionable RAID from the whole
> disks and then partition that? If not, why not? (Letting aside where
> to put the swap partition ...)

I have had to rescue machines, and having a simple boot + / makes life
a lot simpler - why make life hard?

> 
> 

-- 
"We don't believe in planners and deciders making the decisions on behalf of 
Americans."

- George W. Bush
09/06/2000
Scranton, PA


signature.asc
Description: Digital signature


Re: Debian+LVM+RAID

2009-07-08 Thread lee
On Thu, Jul 09, 2009 at 02:13:25PM +1000, Alex Samad wrote:

> md0 = sda1 + sdb1
> md1 = sda2 + sdb2
> md2 = sda3 + sdb3
> 
> md0 = /boot (ext2)
> md1 = / (ext3)
> md2 = lvm physical device 
> 
> Then on LVM
> 
> size of memory = swap partition
> 1G = ext3  mount point /var/log
> 
> Then chop up the rest lvm for /home or ???

Creating a swap partition on a software RAID device isn't ideal. It is
better to create a swap partition on each of the physical devices and
give them the same priority (in /etc/fstab). That's only one example,
you could also use a disk that isn't part of the RAID and have only
one swap partition ...
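
Roughly like this in /etc/fstab (partition names assumed) --- with equal
priorities the kernel stripes swap across both devices:

/dev/sda3  none  swap  sw,pri=1  0  0
/dev/sdb3  none  swap  sw,pri=1  0  0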

BTW, wouldn't you rather create a partitionable RAID from the whole
disks and then partition that? If not, why not? (Letting aside where
to put the swap partition ...)


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-08 Thread lee
On Thu, Jul 09, 2009 at 10:34:13AM +0700, Vilasith Phonepadith wrote:

> Problem: I have to install Debian with the requirements as follows -
> partitioning: use the LVM system for partitioning the servers, if
> possible on Disk RAID1 Mirroring. I don't understand this
> well and don't know how to start. Which one should be done first,
> which after, or should they be done at the same time?

Why do you need LVM?

Imho it is better to create a partitionable RAID array first and then
to create the partitions you want to have on the partitionable RAID
array.

The RAID array must be partitionable, which is an option you
eventually need to specify when creating it. I don't know what the
installer's options do by default, but I would think
that you should be able to open the shell and create the RAID array
with mdadm manually. Once you have the partitionable RAID array, you
should be able to partition it from within the installer like any
other disk.


To clarify: There are partitionable RAID arrays and non-partitionable
RAID arrays. When creating a RAID array, you need to specify which
kind --- partitionable or non-partitionable --- you want to create.

If you want to use non-partitionable RAID arrays, you would create
identical partitions on all of the disks involved and then use those
partitions to create the RAID arrays. Another poster already described
that better.

If you want to use a partitionable RAID array, you can use whole disks
(without partitioning the disks) to create the RAID array. Then you
can partition the RAID array.
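
With mdadm that looks something like this (just a sketch, disk names
assumed):

mdadm --create /dev/md_d0 --auto=part --level=1 --raid-devices=2 \
    /dev/sda /dev/sdb
fdisk /dev/md_d0    # the partitions then appear as /dev/md_d0p1, p2, ...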


As to LVM, I can't say much since I haven't used it yet. And without
knowing more about what you are trying to achieve and why you might
need LVM, there's not much advice that could be given.

If you can avoid using LVM, don't use it. It's always better to keep
things simple. If you want to use LVM because you might want to add
more disks later to make the partitions larger, it would make sense to
me to first create a partitionable RAID array and then to create LVM
partitions as needed on the RAID array. When you're adding more disks
later, you would again create a partitionable RAID array from the
additional disks and use the array with LVM.
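
Growing things later would then be something like this (a sketch; the
array, volume group and volume names are placeholders):

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
pvcreate /dev/md1
vgextend myvg /dev/md1
lvextend -L +500G /dev/myvg/home
resize2fs /dev/myvg/home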

But there are other possible uses of LVM which may suggest a different
approach ...


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org



Re: Debian+LVM+RAID

2009-07-08 Thread Alex Samad
On Thu, Jul 09, 2009 at 10:34:13AM +0700, Vilasith Phonepadith wrote:
> 
> Hello,
> 
> I am trying with LVM+RAID, and I did some tests. I need your help for the 
> installation of Lenny.
> 
> Problem: I have to install Debian with the requirements as follows -
> partitioning: use the LVM system for partitioning the servers, if possible
> on Disk RAID1 Mirroring.
> I don't understand this well and don't know how to start. Which one should
> be done first, which after, or should they be done at the same time?
> If we install by default, it's alright. But now, with LVM+RAID, it's 
> something different for me.

I would suggest (presuming you have sda and sdb and they are the same
size):

I would partition up sda and sdb like

sda
p1 - 1G
p2 - 20G
p3 - rest

sdb - the same as sda


make 

md0 = sda1 + sdb1
md1 = sda2 + sdb2
md2 = sda3 + sdb3

md0 = /boot (ext2)
md1 = / (ext3)
md2 = lvm physical device 

Then on LVM

size of memory = swap partition
1G = ext3  mount point /var/log

Then chop up the rest lvm for /home or ???
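
Roughly, in commands (the sizes and the vg0 name are just examples for the
layout above):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkfs.ext2 /dev/md0                            # /boot
mkfs.ext3 /dev/md1                            # /
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -L 4G -n swap vg0 && mkswap /dev/vg0/swap
lvcreate -L 1G -n varlog vg0 && mkfs.ext3 /dev/vg0/varlog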

Alex


> 
> Thank you very much and good day.
> 
> Vilasith
> 
> _
> Windows Live™: Keep your life in sync. Check it out!
> http://windowslive.com/explore?ocid=TXT_TAGLM_WL_t1_allup_explore_012009
-- 
"More and more of our imports are coming from overseas."

- George W. Bush
09/26/2005
On NPR's Morning Edition


signature.asc
Description: Digital signature


Debian+LVM+RAID

2009-07-08 Thread Vilasith Phonepadith

Hello,

I am trying with LVM+RAID, and I did some tests. I need your help for the 
installation of Lenny.

Problem: I have to install Debian with the requirements as follows -
partitioning: use the LVM system for partitioning the servers, if possible
on Disk RAID1 Mirroring.
I don't understand this well and don't know how to start. Which one should
be done first, which after, or should they be done at the same time?
If we install by default, it's alright. But now, with LVM+RAID, it's something 
different for me.

Thank you very much and good day.

Vilasith

_
Windows Live™: Keep your life in sync. Check it out!
http://windowslive.com/explore?ocid=TXT_TAGLM_WL_t1_allup_explore_012009