Re: [zfs-discuss] dm-crypt + ZFS on Linux

2012-11-30 Thread Darren J Moffat



On 11/30/12 11:41, Darren J Moffat wrote:



On 11/23/12 15:49, John Baxter wrote:

After searching for dm-crypt and ZFS on Linux and finding too little
information, I shall ask here. Please keep in mind this is in the context
of running in a production environment.

We have the need to encrypt our data, approximately 30 TB on three ZFS
volumes under Solaris 10. The volumes currently reside on iSCSI SANs
connected via 10 Gb/s Ethernet. We have tested Solaris 11 with ZFS
encrypted volumes and found the performance to be very poor, and we have
an open bug report with Oracle.


This "bug report" hasn't reached me yet and I'd really like to be sure
if there is a performance bug with ZFS that is unique to encryption I
can attempt to resolve it.

Can you please provide the bug and/or SR number that Oracle Support gave
you?


For the sake of those on the list, I've got these references now.

--
Darren J Moffat


Re: [zfs-discuss] Remove disk

2012-11-30 Thread Jan Owoc
On Fri, Nov 30, 2012 at 9:05 AM, Tomas Forsman  wrote:
>
> I don't have it readily at hand how to check the ashift value on a
> vdev; anyone else / the archives / Google?
>

This? ;-)
http://lmgtfy.com/?q=how+to+check+the+ashift+value+on+a+vdev&l=1

The first hit has:
# zdb mypool | grep ashift
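
For example (made-up pool name, and values just for illustration):

# zdb -C mypool | grep ashift
ashift: 9
ashift: 12

You should see ashift printed for each top-level vdev (possibly repeated
elsewhere in the dump if you omit -C); 9 means 512-byte sectors, 12 means
4K sectors.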

Jan


Re: [zfs-discuss] Remove disk

2012-11-30 Thread Tomas Forsman
On 30 November, 2012 - Jim Klimov sent me these 2,3K bytes:

> On 2012-11-30 15:52, Tomas Forsman wrote:
>> On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
>>
>>> Hi all,
>>>
>>> I would like to know if with ZFS it's possible to do something like this:
>>>
>>> http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
>
> Removing a disk - no, one still cannot reduce the number of devices
> in a zfs pool nor change raidzN redundancy levels (you can change
> single disks to mirrors and back), nor reduce disk size.
>
> As Tomas wrote, you can increase the disk size by replacing smaller
> ones with bigger ones.

... unless you're hit by the 512B/4K sector crap. I don't have it readily
at hand how to check the ashift value on a vdev; anyone else / the
archives / Google?

/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] query re disk mirroring

2012-11-30 Thread Enda O'Connor
On 29/11/2012 14:51, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Enda o'Connor - Oracle Ireland -

Say I have an LDoms guest using a mirrored ZFS root pool, where the two
sides of the mirror come from two separate vds servers, that is
mirror-0
c3d0s0
c4d0s0

where c3d0s0 is served by one vds server, and c4d0s0 is served by
another vds server.

Now if, for some reason, this physical rig loses power, how do I know
which side of the mirror to boot off, i.e. which side is most recent?


If one storage host goes down, it should be no big deal, one side of the mirror 
becomes degraded, and later when it comes up again, it resilvers.

If one storage host goes down, and the OS continues running for a while and 
then *everything* goes down, later you bring up both sides of the storage, and 
bring up the OS, and the OS will know which side is more current because of the 
higher TXG.  So the OS will resilver the old side.

If one storage host goes down, and the OS continues running for a while and 
then *everything* goes down...  Later you bring up only one half of the 
storage, and bring up the OS.  Then the pool will refuse to mount, because with 
missing devices, it doesn't know if maybe the other side is more current.

As long as one side of the mirror disappears and reappears while the OS is 
still running, no problem.

As long as all the devices are present during boot, no problem.

The only problem is when you try to boot from one side of a broken mirror. If
you need to do this, you should mark the broken side as such before shutting
down; "detach" would certainly do the trick, and "offline" might as well.
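
Roughly, assuming the root pool is called rpool and using the device names
from your example:

# zpool detach rpool c4d0s0
(drops that side from the mirror entirely), or
# zpool offline rpool c4d0s0
(marks it administratively offline, so the pool won't expect it at boot).
To put things back together afterwards:
# zpool attach rpool c3d0s0 c4d0s0   (after a detach)
# zpool online rpool c4d0s0          (after an offline)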


Thanks. From my testing, it appears that if a disk goes into the UNAVAIL state
and further data is written to the other disk, then even if I boot from the
stale side of the mirror, the boot process detects this and actually mounts
the good side and resilvers the side I passed as the boot argument.
If a disk is FAULTED, then booting from it results in ZFS panicking and
telling me to boot from the other side.


So it appears that some failure modes are handled well, while others result
in a panic loop.


I have both sides in boot-device and both disks are available to OBP at 
boot time in my testing.


I'm just trying to determine the optimal value for autoboot in my LDoms
guests in the face of various failure modes.


thanks for the info
Enda




Does that answer it?





Re: [zfs-discuss] Remove disk

2012-11-30 Thread Jim Klimov

On 2012-11-30 15:52, Tomas Forsman wrote:

On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:


Hi all,

I would like to know if with ZFS it's possible to do something like this:

http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html


Removing a disk - no, one still cannot reduce the number of devices
in a zfs pool nor change raidzN redundancy levels (you can change
single disks to mirrors and back), nor reduce disk size.

As Tomas wrote, you can increase the disk size by replacing smaller
ones with bigger ones.

With sufficiently small starting disks and big new disks (e.g. moving
up from 1-2 TB to 4 TB) you can "cheat" by putting several partitions
on one drive and giving them to different pool components, if your
goal is to reduce the number of physical disks in the pool.

However, note that:

1) A single HDD becomes a SPOF, so you should spread its partitions
across different raidz sets; that way, if the HDD dies, it does not take
down a critical number of components in any one vdev and does not kill
the pool.

2) The disk mechanics will be "torn" between many requests to your
pool's top-level VDEVs, probably greatly reducing achievable IOPS
(since the TLVDEVs are accessed in parallel).

So while possible, this cheat is useful as a temporary measure, i.e.
while you migrate data and don't have enough drive bays to hold both
the old and the new disks, and want to be on the safe side by not
*removing* a good disk in order to replace it with a bigger one.
With this "cheat" you have all data safely and redundantly stored on
disks at all times during the migration. In the end this disk can be
the last piece of the puzzle in your migration.
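
A rough sketch of the commands, with made-up device names: say c5t0d0 is
the new big disk, partitioned with format(1M) into two slices s0 and s1,
and c1t3d0 / c2t3d0 are small disks belonging to two different raidz2 sets:

# zpool replace tank c1t3d0 c5t0d0s0
# zpool replace tank c2t3d0 c5t0d0s1

Once both resilvers complete, the two old small disks can be pulled.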



meaning:

I have a zpool with 48 disks arranged as 4 raidz2 vdevs (12 disks each). Of
those 48 disks, 36 are 3 TB and 12 are 2 TB.
Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool,
ask the zpool to migrate all the data from those 12 old disks onto the new
ones, and then remove those old disks?


You pull out one 2T disk, put in a 4T, and wait for the resilver (possibly
telling it to replace, if you don't have autoreplace on).
Repeat until done.
If you have the physical space, you can instead put the new disk in first,
tell it to replace, and then remove the old one.





Re: [zfs-discuss] Remove disk

2012-11-30 Thread Tomas Forsman
On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:

> Hi all,
> 
> I would like to know if with ZFS it's possible to do something like this:
> 
>   http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
> 
> meaning:
> 
> I have a zpool with 48 disks arranged as 4 raidz2 vdevs (12 disks each). Of
> those 48 disks, 36 are 3 TB and 12 are 2 TB.
> Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool,
> ask the zpool to migrate all the data from those 12 old disks onto the new
> ones, and then remove those old disks?

You pull out one 2T disk, put in a 4T, and wait for the resilver (possibly
telling it to replace, if you don't have autoreplace on).
Repeat until done.
If you have the physical space, you can instead put the new disk in first,
tell it to replace, and then remove the old one.
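
In commands, something like this (device names made up). Either turn on
autoreplace once:

# zpool set autoreplace=on tank
(after that, swapping a disk in the same slot starts the resilver on its own)

or tell it explicitly after each swap:

# zpool replace tank c1t3d0
(the new disk went into the same slot the old 2T occupied)

Then watch the resilver and repeat:

# zpool status tank

If you have a free bay instead, resilver onto the new disk before pulling
the old one:

# zpool replace tank c1t3d0 c4t0d0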

/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


[zfs-discuss] Remove disk

2012-11-30 Thread Albert Shih
Hi all,

I would like to know if with ZFS it's possible to do something like this:

http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html

meaning:

I have a zpool with 48 disks arranged as 4 raidz2 vdevs (12 disks each). Of
those 48 disks, 36 are 3 TB and 12 are 2 TB.
Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool,
ask the zpool to migrate all the data from those 12 old disks onto the new
ones, and then remove those old disks?
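
For reference, a scaled-down sketch of the kind of layout described, with
invented device names and only 4 disks per raidz2 vdev instead of 12:

# zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 \
    raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0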

Regards.


-- 
Albert SHIH
DIO, building 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Telephone: 01 45 07 76 26 / 06 86 69 95 71
xmpp: j...@obspm.fr
Local time:
Fri 30 Nov 2012 15:18:32 CET


Re: [zfs-discuss] dm-crypt + ZFS on Linux

2012-11-30 Thread Darren J Moffat



On 11/23/12 15:49, John Baxter wrote:

After searching for dm-crypt and ZFS on Linux and finding too little
information, I shall ask here. Please keep in mind this is in the context
of running in a production environment.

We have the need to encrypt our data, approximately 30 TB on three ZFS
volumes under Solaris 10. The volumes currently reside on iSCSI SANs
connected via 10 Gb/s Ethernet. We have tested Solaris 11 with ZFS
encrypted volumes and found the performance to be very poor, and we have
an open bug report with Oracle.
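
For reference, the kind of encrypted ZFS dataset or volume being tested here
is created on Solaris 11 roughly like this (pool/dataset names and size are
invented):

# zfs create -o encryption=on tank/secure
(prompts for a passphrase; keysource defaults to passphrase,prompt)
# zfs create -V 1t -o encryption=on tank/securevol
(the same for a ZFS volume)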


This "bug report" hasn't reached me yet and I'd really like to be sure 
if there is a performance bug with ZFS that is unique to encryption I 
can attempt to resolve it.


Can you please provide the bug and/or SR number that Oracle Support gave
you?



We are a Linux shop, and since performance is so poor and there is still no
resolution, we are considering ZFS on Linux with dm-crypt.
I have read once or twice that if we implemented ZFS + dm-crypt we would
lose features; however, which features is not specified.
We currently mirror the volumes across identical iSCSI SANs with ZFS, and
we use hourly ZFS snapshots to update our DR site.

Which features of ZFS are lost if we use dm-crypt? My guess would be that
they are related to raidz, but I'm not sure.
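
For what it's worth, the usual dm-crypt layering is one LUKS mapping per
disk, with the pool built on top of the /dev/mapper devices, roughly
(device and mapping names invented):

# cryptsetup luksFormat /dev/sdb
# cryptsetup luksFormat /dev/sdc
# cryptsetup luksOpen /dev/sdb crypt-sdb
# cryptsetup luksOpen /dev/sdc crypt-sdc
# zpool create tank mirror /dev/mapper/crypt-sdb /dev/mapper/crypt-sdc

As far as I know, pool-level features (mirroring, raidz, scrub, snapshots,
send/receive) keep working, since ZFS just sees block devices; what you give
up compared to native ZFS crypto is mainly per-dataset keys and the option
of mixing encrypted and unencrypted datasets in one pool.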





--
Darren J Moffat