Re: [zfs-discuss] Complete Linux Noob

2010-06-17 Thread Erik Trimble

On 6/16/2010 6:55 AM, Scott Kaelin wrote:



On Wed, Jun 16, 2010 at 6:03 AM, Orvar Korvar wrote:


"You can't expand a normal RAID, either, anywhere I've ever seen."
Is this true?

Depending on the software/hardware you use, this is not true. High-end 
HW RAID controllers support capacity expansion (adding a new drive to 
an existing array). There is also an experimental feature (at least it 
was when I used it) of mdadm (Linux soft RAID) which allows you to do 
the same thing (I think they call it modifying the geometry of the 
array in their docs), but you need certain kernel features turned on.


--
Scott Kaelin
0x6BE43783


Actually, most modern SCSI and SAS RAID controllers support adding a 
disk to an existing RAID3/4/5/6 configuration. I know my IBM ServeRAID 
controllers do, and they've been doing it since, well, the stone age (or 
at least the mid-90s).  IIRC, you can do it in Solaris Volume 
Manager (DiskSuite), and even VxVM supports it, though both do it 
imperfectly (or, perhaps, I should say sub-optimally).


While it's a much simpler task to implement in hardware than in 
software, it's harder than usual for something like ZFS. Not that the 
BP (block-pointer) rewrite work looks like it will *really* be finished 
anytime soon... 


:-)

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Complete Linux Noob

2010-06-16 Thread Freddie Cash
On Wed, Jun 16, 2010 at 3:03 AM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:

> "You can't expand a normal RAID, either, anywhere I've ever seen."
> Is this true?
>
> A "vdev" can be a group of discs configured as raidz1/mirror/etc. A zfs
> raid consists of several vdevs. You can add a new vdev whenever you want.
>

Close.

A vdev consists of one or more disks configured as stand-alone (no
redundancy), mirror, or raidz (along with a couple of special duty vdevs:
cache, log, or spare).

A pool consists of multiple vdevs, preferably of the same configuration (all
mirrors, all raidz1, all raidz2, etc).

You can add vdevs to the pool at anytime.

You cannot expand a raidz vdev by adding drives, though (e.g. converting a
4-drive raidz1 into a 5-drive raidz1), nor can you convert between raidz
types (4-drive raidz1 to a 6-drive raidz2).
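To make the distinction concrete, here is a sketch of growing a pool by adding a second raidz1 vdev. The pool name ('tank') and device names are hypothetical, and this obviously needs a real system with spare disks to run:

```shell
# Original pool: one 4-disk raidz1 vdev (hypothetical device names).
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Later, add a second 4-disk raidz1 vdev. The pool's capacity grows
# immediately, and new writes are striped across both vdevs.
zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Verify: status now lists two raidz1 top-level vdevs.
zpool status tank
```

Note that 'zpool add' is one-way: there is no 'zpool remove' for a data vdev, so double-check the command before running it.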

-- 
Freddie Cash
fjwc...@gmail.com


Re: [zfs-discuss] Complete Linux Noob

2010-06-16 Thread Scott Kaelin
On Wed, Jun 16, 2010 at 6:03 AM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:

> "You can't expand a normal RAID, either, anywhere I've ever seen."
> Is this true?
>
Depending on the software/hardware you use, this is not true. High-end HW
RAID controllers support capacity expansion (adding a new drive to an
existing array). There is also an experimental feature (at least it was when
I used it) of mdadm (Linux soft RAID) which allows you to do the same thing
(I think they call it modifying the geometry of the array in their docs), but
you need certain kernel features turned on.
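For reference, the mdadm reshape mentioned above looks roughly like this. Array and device names are hypothetical, and this is a sketch rather than a tested recipe; it needs a kernel with MD reshape support and should only be tried with backups in place:

```shell
# Add a fifth drive to an existing 4-drive RAID5 array as a spare...
mdadm /dev/md0 --add /dev/sde1

# ...then reshape the array geometry to use 5 drives. The backup file
# protects the critical section of the reshape against a crash.
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-grow.bak

# Watch the reshape progress.
cat /proc/mdstat
```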

-- 
Scott Kaelin
0x6BE43783


Re: [zfs-discuss] Complete Linux Noob

2010-06-16 Thread Orvar Korvar
"You can't expand a normal RAID, either, anywhere I've ever seen."
Is this true?



A "vdev" can be a group of discs configured as raidz1/mirror/etc. A zfs raid 
consists of several vdevs. You can add a new vdev whenever you want.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Complete Linux Noob

2010-06-15 Thread David Dyer-Bennet

On Tue, June 15, 2010 14:13, CarlPalmer wrote:
> I have been researching different types of raids, and I happened across
> raidz, and I am blown away.  I have been trying to find resources to
> answer some of my questions, but many of them are either over my head in
> terms of details, or foreign to me as I am a linux noob, and I have to
> admit I have never even looked at Solaris.

Heh; caught another one :-) .

> Are the Parity drives just that, a drive assigned to parity, or is the
> parity shared over several drives?

No drives are formally designated for "parity"; all n drives in the RAIDZ
vdev are used together in such a way that you can lose one drive without
loss of data, but exactly which bits are "data" and which bits are
"parity" and where they are stored is not something the admin has to think
about or know (and in fact cannot know).

> I understand that you can build a raidz2 that will have 2 parity disks.
> So in theory I could lose 2 disks and still rebuild my array so long as
> they are not both the parity disks correct?

Any two disks out of a raidz2 vdev can be lost.  Lose a third before the
resilver completes and your data is toast.

> I understand that you can have Spares assigned to the raid, so that if a
> drive fails, it will immediately grab the spare and rebuild the damaged
> drive.  Is this correct?

Yes, RAIDZ (including z2 and z3) and mirror vdevs will grab a "hot spare"
if one is assigned and needed, and start the resilvering operation
immediately.
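Assigning a hot spare is a one-liner; a sketch with hypothetical names (on some releases the pool's 'autoreplace' property also influences whether the spare kicks in automatically):

```shell
# Add a hot spare to the pool. If a drive in a redundant vdev fails,
# the spare is pulled in and resilvering starts.
zpool add tank spare c3t0d0

# The spare appears under a 'spares' section in the status output.
zpool status tank
```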

> Now I can not find anything on how much space is taken up in the raidz1 or
> raidz2.  If all the drives are the same size, does a raidz2 take up the
> space of 2 of the drives for parity, or is the space calculation
> different?

That's the right calculation.

> I get that you cannot expand a raidz as you would a normal raid, by
> simply slapping on a drive.  Instead it seems that the preferred method is
> to create a new raidz.  Now let's say that I want to add another raidz1 to
> my system; can I get the OS to present this as one big drive with the
> space from both raid pools?

You can't expand a normal RAID, either, anywhere I've ever seen.

A "pool" can contain multiple "vdevs".  You can add additional vdevs to a
pool, and the new space becomes immediately available to the pool, and hence
to anything (like a filesystem) drawing from that pool.

(The zpool command will attempt to stop you from mixing vdevs of different
redundancy in the same pool, but you can force it to let you.  Mixing a
RAIDZ vdev and a RAIDZ3 vdev in the same pool is a silly thing to do,
since you don't control where in the pool any new data goes, and it's
likely to be striped across the vdevs in the pool.)

You can also replace all the drives in a vdev, serially (waiting for the
resilver to complete at each step before continuing to the next drive), and
if the new drives are larger than the old ones, once you've replaced all of
them the new space becomes usable in that vdev.  This is particularly
useful with mirrors, where there are only two drives to replace.
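That serial replacement can be sketched as follows (hypothetical device names; the important part is waiting for each resilver to finish before issuing the next replace):

```shell
# Swap each old disk for a larger one, one at a time.
zpool replace tank c1t0d0 c4t0d0
zpool status tank        # wait here until the resilver completes
zpool replace tank c1t1d0 c4t1d0
zpool status tank        # wait again

# On newer releases the 'autoexpand' pool property controls whether
# the extra capacity is picked up automatically once all disks are
# replaced.
zpool set autoexpand=on tank
```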

(Well, actually, ZFS mirrors can have any number of drives.  To avoid the
risk of loss when upgrading the drives in a mirror, attach the new bigger
drive FIRST, wait for the resilver, and THEN detach one of the smaller
original drives; repeat for the second drive, and you will never go to a
redundancy lower than 2.  You can even attach BOTH new disks at once, if
you have the slots and controller space, and have a 4-way mirror for a
while.  Somebody reported configuring ALL the drives in a 'Thumper' as a
single 48-way mirror, just to see if it worked.  It did.)
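The attach-first, detach-later upgrade looks something like this (hypothetical device names; 'tank' is a two-way mirror of small disks c1t0d0 and c1t1d0):

```shell
# Attach a larger disk FIRST, making a temporary 3-way mirror...
zpool attach tank c1t0d0 c5t0d0
zpool status tank            # wait for the resilver to complete

# ...then detach one small disk. Redundancy never dropped below 2.
zpool detach tank c1t0d0

# Repeat for the second small disk.
zpool attach tank c1t1d0 c5t1d0
zpool status tank            # wait again
zpool detach tank c1t1d0
```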

> How do I share these types of raid pools across the network.  Or more
> specifically, how do I access them from Windows based systems?  Is there
> any special trick?

Nothing special.  In-kernel CIFS is better than Samba, and supports full
NTFS ACLs.  I hear it also attaches to AD cleanly, but I haven't done
that, as I don't run AD at home.
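Enabling the in-kernel CIFS server is mostly a per-dataset property; a sketch, assuming an OpenSolaris-style system and a hypothetical dataset 'tank/media':

```shell
# Enable the kernel SMB service (and its dependencies).
svcadm enable -r smb/server

# Share the dataset over SMB; Windows clients then see \\host\media.
zfs set sharesmb=name=media tank/media

# Confirm the property took effect.
zfs get sharesmb tank/media
```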

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Complete Linux Noob

2010-06-15 Thread Roy Sigurd Karlsbakk

> How do I share these types of raid pools across the network. Or more
> specifically, how do I access them from Windows based systems? Is
> there any special trick?

Most of your questions are answered here

http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.


Re: [zfs-discuss] Complete Linux Noob

2010-06-15 Thread Freddie Cash
Some of my terminology may not be 100% accurate, so apologies in advance to
the pedants on this list.  ;)

On Tue, Jun 15, 2010 at 12:13 PM, CarlPalmer  wrote:

> I have been researching different types of raids, and I happened across
> raidz, and I am blown away.  I have been trying to find resources to answer
> some of my questions, but many of them are either over my head in terms of
> details, or foreign to me as I am a linux noob, and I have to admit I have
> never even looked at Solaris.
>
> Are the Parity drives just that, a drive assigned to parity, or is the
> parity shared over several drives?
>

Separate parity drives are a RAID3 setup.  raidz1 is similar to RAID5 in that
it uses distributed parity (parity blocks are written out to all the disks
as needed).  raidz2 is similar to RAID6.  raidz3 (triple-parity RAID) is
similar to ... RAID7?  I don't think there are actually any formal RAID
levels above RAID6, are there?


> I understand that you can build a raidz2 that will have 2 parity disks.  So
> in theory I could lose 2 disks and still rebuild my array so long as they
> are not both the parity disks correct?
>

There are no "parity disks" in raidz.  With raidz2, you can lose any 2
drives in the vdev, without losing any data.  Lose a third drive, though,
and everything is gone.

With raidz3, you can lose any 3 drives in the vdev without losing any data.
 Lose a fourth drive, though, and everything is gone.


> I understand that you can have Spares assigned to the raid, so that if a
> drive fails, it will immediately grab the spare and rebuild the damaged
> drive.  Is this correct?
>

Depending on the version of ZFS being used, and whether or not you set the
property that controls this feature, yes.  Hot-spares will start rebuilding
a degraded vdev right away.


> Now I can not find anything on how much space is taken up in the raidz1 or
> raidz2.  If all the drives are the same size, does a raidz2 take up the
> space of 2 of the drives for parity, or is the space calculation different?
>

Correct.  raidz1 loses 1 drive's worth of space to parity, raidz2 loses 2
drives' worth, and raidz3 loses 3 drives' worth.
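As arithmetic, for n same-size drives with p parity levels (a sketch; real usable space is slightly lower due to metadata and padding overhead):

```shell
# Usable capacity of a raidz vdev is roughly (n - p) * drive_size,
# where p is 1, 2, or 3 for raidz1/raidz2/raidz3.
n=6; p=2; drive_size_tb=2
usable=$(( (n - p) * drive_size_tb ))
echo "${usable} TB usable"   # six 2 TB drives in raidz2 -> 8 TB usable
```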


> I get that you cannot expand a raidz as you would a normal raid, by simply
> slapping on a drive.  Instead it seems that the preferred method is to
> create a new raidz.  Now let's say that I want to add another raidz1 to my
> system; can I get the OS to present this as one big drive with the space
> from both raid pools?
>

Yes.  That is the whole point of pooled storage.  :)  As you add vdevs to
the pool, the available space increases.  There's no partitioning required;
you just create ZFS filesystems and volumes as needed.


> How do I share these types of raid pools across the network.  Or more
> specifically, how do I access them from Windows based systems?  Is there any
> special trick?
>

The same way you access any harddrive over the network:
  - NFS
  - SMB/CIFS
  - iSCSI
  - etc

It just depends at what level you want to access the storage (files, shares,
block devices, etc).

-- 
Freddie Cash
fjwc...@gmail.com


[zfs-discuss] Complete Linux Noob

2010-06-15 Thread CarlPalmer
I have been researching different types of raids, and I happened across raidz, 
and I am blown away.  I have been trying to find resources to answer some of my 
questions, but many of them are either over my head in terms of details, or 
foreign to me as I am a linux noob, and I have to admit I have never even 
looked at Solaris.

Are the Parity drives just that, a drive assigned to parity, or is the parity 
shared over several drives?

I understand that you can build a raidz2 that will have 2 parity disks.  So in 
theory I could lose 2 disks and still rebuild my array so long as they are not 
both the parity disks correct?

I understand that you can have Spares assigned to the raid, so that if a drive 
fails, it will immediately grab the spare and rebuild the damaged drive.  Is 
this correct?

Now I cannot find anything on how much space is taken up in the raidz1 or 
raidz2.  If all the drives are the same size, does a raidz2 take up the space 
of 2 of the drives for parity, or is the space calculation different?

I get that you cannot expand a raidz as you would a normal raid, by simply 
slapping on a drive.  Instead it seems that the preferred method is to create a 
new raidz.  Now let's say that I want to add another raidz1 to my system; can I 
get the OS to present this as one big drive with the space from both raid pools?

How do I share these types of raid pools across the network.  Or more 
specifically, how do I access them from Windows based systems?  Is there any 
special trick?
-- 
This message posted from opensolaris.org