Re: [zfs-discuss] Accidentally mixed-up disks in RAIDZ

2009-11-15 Thread Leandro Vanden Bosch
Solved!

This is what I did: I booted with the disks unplugged and took a 'digest -a md5 
/etc/zfs/zpool.cache', then ran 'zpool export -f data' and got an error message 
saying that the pool didn't exist. I checked the MD5 of zpool.cache again and it 
had indeed changed (so, although the export appeared to fail, it actually did 
something). I turned off the machine, plugged the disks back in and then 
performed a 'zpool import' without any issue.
All the properties retained their previous settings.
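
For the archives, the sequence was roughly the following (pool named 'data' as 
above; the digest output itself is omitted):

  # with the data disks still unplugged
  digest -a md5 /etc/zfs/zpool.cache

  # force the export even though the devices are missing;
  # this prints an error but still rewrites the cache file
  zpool export -f data

  # compare against the first digest -- a changed hash means
  # the cache really was updated
  digest -a md5 /etc/zfs/zpool.cache

  # power off, plug the disks back in, then:
  zpool import          # should now list 'data'
  zpool import data
  zpool status data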

Thanks to all for your tips.

Regards,

Leandro.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread jay
I may be missing something here, but from the setup he is describing, his 
raid-z should be seeing four 1TB drives. Thus, in theory, he should be able to 
lose both 500GB drives and still recover, since they are only viewed as a 
single drive in the raid-z. The main drawbacks are performance and the lack of 
ability to fully manage the 500GB drives.

Sent from my BlackBerry® smartphone with SprintSpeed

-Original Message-
From: Tim Cook 
Date: Sun, 15 Nov 2009 15:59:22 
To: Les Pritchard
Cc: 
Subject: Re: [zfs-discuss] Best config for different sized disks

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Tim Cook
On Sun, Nov 15, 2009 at 1:19 PM, Les Pritchard wrote:

> Hi Bob,
>
> Thanks for the input. I've had a play and created a stripe of the two 500gb
> disks and then exported them as a volume. That was the key - I could then
> treat it as a regular device and add it with the other 3 disks to create a
> raidz pool of them all.
>
> Works very well and I'm sure the owner of the disks will be very happy to
> not spend more money! Thanks for the tip.
>
> Les
>
>
Once again I question why you're wasting your time with raid-z.  You might
as well just stripe across all the drives.  You're taking a performance
penalty for a setup that essentially has 0 redundancy.  You lose a 500gb
drive, you lose everything.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub differs in execute time?

2009-11-15 Thread Brandon High
On Sun, Nov 15, 2009 at 10:39 AM, Orvar Korvar
 wrote:
> Yes that might be the cause. Thanks for identifying that. So I would gain 
> bandwidth if I tucked some drives on the mobo SATA and some drives on the AOC 
> card, instead of having all drives on the AOC card.

Yup! The ICH10 is connected at 10Gb/sec to the northbridge, so it
shouldn't have bandwidth issues.

Two modern drives will be able to fully saturate the PCI bus. You could
get away with more, however, since most activity isn't large sequential
reads. Things like scrubs (which are lots of sequential reads) will
take a more noticeable performance hit than everyday use.
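
Back-of-the-envelope, assuming the AOC card sits in a plain 32-bit/33 MHz PCI
slot and roughly 90 MB/s sequential per disk (both figures are assumptions,
not measurements):

  pci_mb=133     # theoretical peak of 32-bit/33 MHz PCI, shared by the whole bus
  disk_mb=90     # rough sequential rate of one current 7200 rpm drive
  echo "two disks: $((2 * disk_mb)) MB/s vs bus ceiling: ${pci_mb} MB/s"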

-B

-- 
Brandon High : bh...@freaks.com
"God is big, so don't fuck with him."
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Les Pritchard
Hi Bob,

Thanks for the input. I've had a play and created a stripe of the two 500gb 
disks and then exported them as a volume. That was the key - I could then treat 
it as a regular device and add it with the other 3 disks to create a raidz pool 
of them all.
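
For the archives, the rough shape of it with SVM (device names here are
invented and the metadb slice will differ on other systems):

  # SVM wants state database replicas before metainit will run
  metadb -a -f -c 3 c0t0d0s7

  # d10: one stripe across the two 500GB disks (shows up as a ~1TB device)
  metainit d10 1 2 c1t3d0s0 c1t4d0s0

  # raidz across the three 1TB disks plus the SVM metadevice
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 /dev/md/dsk/d10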

Works very well and I'm sure the owner of the disks will be very happy to not 
spend more money! Thanks for the tip.

Les
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Brandon High
On Sun, Nov 15, 2009 at 9:27 AM, Bob Friesenhahn
 wrote:
>> 3 x1TB and 2 x500GB disks. Is there any way the 2x500GB disks could be put
>> into a striped pool that could then be part of a 4 x1TB RAIDZ pool?
>
> I expect that you could use Solaris Volume Manager (DiskSuite) to stripe the
> 2x500GB disks into a larger device, which could then be used as a single
> device by zfs.

I wonder if a stripe or a concat would be better for this use? If one
drive failed, you could possibly read 1/2 the blocks for resilvering
without waiting on a failed drive for every other block... Regardless,
you are twice as likely to lose the SVM volume as a native 1TB drive.
Performance will probably be pretty good regardless of the type of SVM
volume you use.

There are a bunch of configurations you could use, depending on how
much risk tolerance you have and whether you plan on upgrading drives
later.

The best option to get the most space and best protection would be to
replace the 500GB drives with 1TB and do a 5x 1TB raidz.

Creating two vdevs with a 3x 1TB raidz and a 2x 500GB mirror in one
pool would give you 2.5TB of space and pretty good performance. This
is probably the safest way to use your different drive sizes.
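
A sketch of that layout (device names are invented; zpool complains about
mixing raidz and mirror vdevs, hence the -f):

  zpool create -f tank \
      raidz  c1t0d0 c1t1d0 c1t2d0 \
      mirror c1t3d0 c1t4d0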

You could also use mirrors for equally sized drives which would give
you 1.5TB usable. The 3rd 1TB would not have any redundancy, but if
you're comfortable with the risk, you could add it for 2.5TB. I would
not recommend it however. This option would probably give you the best
write performance, with or without the 3rd 1TB drive.

Another option is to partition the 1TB drives, then create a 5x 500GB
raidz pool and a second 3x 500GB pool. Two pools are not as flexible,
but you could get away with single parity raidz, since losing a drive
would only degrade one vdev per pool. Performance will probably suck
since you are forcing the drive to seek a lot, but only when accessing
both pools at the same time.
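
A sketch of that, assuming the three 1TB disks have already been split into
two ~500GB slices (s0/s1) with format(1M); controller/target numbers are
invented:

  # pool 1: first slice of each 1TB disk plus the two real 500GB disks (5x 500GB)
  zpool create poolA raidz c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0 c1t4d0

  # pool 2: the leftover 500GB slices of the 1TB disks (3x 500GB)
  zpool create poolB raidz c1t0d0s1 c1t1d0s1 c1t2d0s1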

You could also do the same partitioning and vdevs, but put them in one
pool. You'd have the same fault tolerance as above, but one 3TB pool.
This has less flexibility for replacing the 500GB drives, at least
until vdev removal is available. Performance would be slightly worse
than above, since the drives will be doing more seeks.

You could also partition your 1TB drives into 500GB pieces, then
create a raidz of the 8 x 500GB partitions. If you have available
ports and plan to upgrade or add devices in the near future, you can
then replace the 500GB partitions with native devices. You'd need to
do raidz2 (or higher) for protection, since losing one 1TB would be
equivalent to losing 2 drives. This would give you 3TB usable, but
until you replaced the partitions with real devices, you'd have less
protection than raidz2 would normally afford. You'd still be better
off replacing the 500GB drives and adding additional drives now and
avoid migration and rebuilds later.
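
And a sketch of that last layout, with the same invented device names and
slicing as above:

  # raidz2 across 8x 500GB: both slices of each 1TB disk plus the two 500GB disks
  zpool create tank raidz2 \
      c1t0d0s0 c1t0d0s1 \
      c1t1d0s0 c1t1d0s1 \
      c1t2d0s0 c1t2d0s1 \
      c1t3d0 c1t4d0

  # later, once a new whole disk is available, swap out one of the slices:
  zpool replace tank c1t0d0s1 c2t0d0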

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub differs in execute time?

2009-11-15 Thread Orvar Korvar
Yes that might be the cause. Thanks for identifying that. So I would gain 
bandwidth if I tucked some drives on the mobo SATA and some drives on the AOC 
card, instead of having all drives on the AOC card.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk I/O in RAID-Z as new disks are added/removed

2009-11-15 Thread Brandon High
On Sun, Nov 8, 2009 at 12:05 AM, besson3c  wrote:
> Any general information you can provide me as far as the theoretical concepts 
> behind increasing I/O by adding disks to a RAID-Z pool would be appreciated 
> as I assess this technology :)

As with RAID5 and RAID6, a wider stripe will improve read performance
for raidz. Writes will generally be limited to the throughput of your
slowest device. On average, writes will still be faster than
RAID5/6, since there is no read / re-write penalty for partial writes.
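
Rough rule of thumb for the read side (the per-disk figure is an assumption,
just to show the scaling):

  disks=5; disk_mb=90    # N-way raidz, assumed sequential rate per disk
  echo "sequential read ~ $(((disks - 1) * disk_mb)) MB/s; writes pace the slowest disk"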

-B

-- 
Brandon High : bh...@freaks.com
If violence doesn't solve your problem, you're not using enough of it.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Bob Friesenhahn

On Sun, 15 Nov 2009, Les Pritchard wrote:


> Hi, just wondering if I can get any ideas on my situation. I've used ZFS a lot 
> with equal sized disks and am extremely happy / amazed with what it can offer. 
> However I've encountered a few people who want to use ZFS but have a bunch of 
> different disks but still want the max size of usable space possible.
>
> Take an example:
> 3 x1TB and 2 x500GB disks. Is there any way the 2x500GB disks could 
> be put into a striped pool that could then be part of a 4 x1TB RAIDZ 
> pool?


I expect that you could use Solaris Volume Manager (DiskSuite) to 
stripe the 2x500GB disks into a larger device, which could then be 
used as a single device by zfs.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Tim Cook
On Sun, Nov 15, 2009 at 9:25 AM, Les Pritchard wrote:

> Hi, just wondering if I can get any ideas on my situation. I've used ZFS a
> lot with equal sized disks and am extremely happy / amazed with what it can
> offer. However I've encountered a few people who want to use ZFS but have a
> bunch of different disks but still want the max size of usable space
> possible.
>
> Take an example:
> 3 x1TB and 2 x500GB disks. Is there any way the 2x500GB disks could be put
> into a striped pool that could then be part of a 4 x1TB RAIDZ pool?
>

Nope, not unless you used a hardware raid card.  Doing that would be a *bad
idea* anyways.  You'd basically be throwing away the entire reason for doing
raid-z as there would be no redundancy in the 500GB drive raidset.


>
> If they were all put in a RAIDZ pool as is, it would treat all the disks as
> 500GB and lose the rest of the space - is that correct?
>

Correct.


>
> I know that in this case they could go out and get a very cheap 1TB HDD to
> resolve this, but it's more the idea because I'm seeing lots of people with
> different disks who want to squeeze the most space possible out of them.
>

So have two raidsets.  One with the 1TB drives, and one with the 500s.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Best config for different sized disks

2009-11-15 Thread Les Pritchard
Hi, just wondering if I can get any ideas on my situation. I've used ZFS a lot 
with equal sized disks and am extremely happy / amazed with what it can offer. 
However I've encountered a few people who want to use ZFS but have a bunch of 
different disks but still want the max size of usable space possible.

Take an example:
3 x1TB and 2 x500GB disks. Is there any way the 2x500GB disks could be put into 
a striped pool that could then be part of a 4 x1TB RAIDZ pool?

If they were all put in a RAIDZ pool as is, it would treat all the disks as 
500GB and lose the rest of the space - is that correct?

I know that in this case they could go out and get a very cheap 1TB HDD to 
resolve this, but it's more the idea because I'm seeing lots of people with 
different disks who want to squeeze the most space possible out of them.

Any ideas would be great!

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk I/O in RAID-Z as new disks are added/removed

2009-11-15 Thread Tim Cook
On Sun, Nov 15, 2009 at 2:57 AM, besson3c  wrote:

> Anybody?
>
> I would truly appreciate some general, if not definitive, insight as to what
> one can expect in terms of I/O performance after adding new disks to ZFS
> pools.
>
>
>
I'm guessing you didn't get a response because the first result on google
should have the answer you're looking for.  In any case, if memory serves
correctly, Jeff's blog should have all the info you need:
http://blogs.sun.com/bonwick/entry/raid_z


--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk I/O in RAID-Z as new disks are added/removed

2009-11-15 Thread besson3c
Anybody?

I would truly appreciate some general, if not definitive, insight as to what one 
can expect in terms of I/O performance after adding new disks to ZFS pools.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs eradication

2009-11-15 Thread Joerg Moellenkamp
> 
>>   djm> Much better for jurisdictions that allow for that, but not all
>> not knowing where something physically is at all times?
> 
> I'm not in a position to discuss this jurisdiction's requirements and 
> rationale on a public mailing list.  All I'm saying is that data destruction 
> based only on key destruction/unavailability is not considered enough in some 
> cases.

Nevertheless, I think secure delete cannot exist without cryptography, as more 
and more devices are available where you can't control the placement of a 
block. As far as I know, the secure-erase feature in SSDs is only capable of 
deleting all data, not a single block. Thus the only safe way to really delete 
securely would be the combination of both. When you can't delete a block on the 
device securely, it's protected by the encryption as a last line of defense. 
However, secure deletion by cryptography also needs secure deletion by 
overwriting, as that closes the attack vector of simply waiting until the 
cryptographic algorithm is broken. One part can't exist without the other ...

Regards
 Joerg
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss