[zfs-discuss] Fwd: ? ZFS encryption with root pool

2009-05-05 Thread Vladimir 'phcoder' Serbinenko
On Tue, May 5, 2009 at 12:40 PM, Darren J Moffat darr...@opensolaris.org wrote:

 Vladimir 'phcoder' Serbinenko wrote:



 On Fri, May 1, 2009 at 10:57 AM, Darren J Moffat darr...@opensolaris.org wrote:

Ulrich Graef wrote:

Regarding: ZFS encryption

Will it be possible to have an encrypted root pool?


We don't encrypt pools, we encrypt datasets.  This is the same as
what is done for compression.

It will be possible in the initial integration to have encrypted
datasets in the root pool.  However, the bootfs dataset cannot be
encrypted, nor can /var or /usr if you have split those off into
separate datasets.
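
(For a concrete picture, per-dataset encryption ends up looking roughly like
the sketch below; the property names are an assumption modelled on what later
shipped, the final integration may differ, and the dataset name is made up:)

  # illustrative only -- final syntax and key handling may differ
  zfs create -o encryption=on -o keysource=passphrase,prompt rpool/export/secure
  zfs get encryption,keysource rpool/export/secure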

 bootfs encryption should be quite possible, especially now that we have an
 initial ZFS support patch for grub2 (by me) and a LUKS patch by Michael Gorven.


 Can you send me a pointer to this, please?  Is there a reason why you think
 this is more possible with grub2 than with the 0.97 grub that OpenSolaris
 currently uses?

http://lists.gnu.org/archive/html/grub-devel/2009-04/msg00512.html
Development of grub-0.97 has been halted and our efforts are concentrated on
grub2. Grub2 has a much more flexible design. The points that help ZFS
development on it are the following:
- real memory allocation: no need to implement a stack or use fixed
addresses
- the possibility of opening multiple devices at the same time
- scripting support, which allows booting in complicated scenarios
- we already have cryptography support with Michael Gorven's patch
- everything else I forgot right now, because I don't use or code for grub1.
While I can't stop you from coding on grub-legacy, it would be a waste of
resources and a struggle against grub-legacy limitations, which will be
deprecated soon anyway.



 Integration with TPM?

 It's both useless and dangerous. What would you use it for? If it's for
 storing the key, then it would be like giving a key to someone else and
 hoping that he is trustworthy enough. The moment a crypto geek finds a way
 to retrieve the key from the TPM (if TPM reaches popularity, it will just
 be a question of time), your encryption is useless. As a matter of fact,
 from this point of view the TPM is mere obfuscation. On the other hand, the
 TPM is useful for coercing the user into using particular software or
 complying with restrictions decided by a big company. All of this was
 discussed here:
 http://lists.gnu.org/archive/html/grub-devel/2009-02/threads.html#00232


 You are making assumptions about how ZFS crypto would use the TPM, and I
 haven't designed this yet.  However, one thing I know for sure is that it
 would not be just a key in the TPM; there would be something the user would
 still have to enter by other means.

If you enter a passphrase anyway, then the only scenarios where data can't
be accessed because of the TPM are: (1) you put your disks in another
machine, (2) the TPM chip refuses to give out the authentication key, or
(3) the TPM chip fails. As for the first, I don't see why someone who knows
the passphrase shouldn't be allowed to change a motherboard; it looks more
like a way for a mobo manufacturer to make you stay with him. As for the
second, it could only theoretically prevent a crafted copy from accessing
encrypted data; it does not prevent you from entering the passphrase into a
crafted copy. So an attacker would just install a version which simulates
the password prompt, sends the passphrase over the network, and then
restores the original copy and reboots. As you see, circumventing the TPM
protection only takes one additional step (restoring the original version).
As for the third, I think anyone would be unhappy if his redundant zpool
suddenly became inaccessible just because a chip failed. Using a TPM makes
any redundancy irrelevant.


 --
 Darren J Moffat




-- 
Regards
Vladimir 'phcoder' Serbinenko





[zfs-discuss] Compression/copies on root pool RFE

2009-05-05 Thread Torrey McMahon
Before I put one in ... has anyone else seen one? It seems we support
compression on the root pool, but there is no way to enable it at install
time outside of a custom script you run before the installer. I'm
thinking it should be a real install-time option, have a JumpStart
keyword, etc.  Same with copies=2.


Thanks.
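
(For reference, the pre-installer workaround amounts to something like the
following, run against the freshly created root pool before any files are
laid down; the dataset names are assumptions:)

  # enable compression and extra copies on the root pool (names assumed)
  zfs set compression=on rpool
  zfs set copies=2 rpool/ROOT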


[zfs-discuss] SAS 15K drives as L2ARC

2009-05-05 Thread Roger Solano

Hello,

Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache
for several TBs of SATA disks?


For example:

A STK2540 storage array with this configuration:

   * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
   * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.

I was thinking about using the disks from Tray 1 as L2ARC for Tray 2, and
putting all of these disks in one (1) ZFS storage pool.
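
(Very roughly, the layout being described would be created along these lines;
device names are placeholders and raidz2 is just one possible choice:)

  # SATA drives from Tray 2 as the main raidz2 vdev,
  # SAS drives from Tray 1 as L2ARC cache devices (names made up)
  zpool create tank \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
    cache  c2t0d0 c2t1d0 c2t2d0 c2t3d0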


This pool would be used mainly as an astronomical image repository, shared
via NFS from a Sun Fire X2200.


Is it worth doing?

Thanks in advance for any help.

Regards,
Roger


--

Roger Solano
Solutions Architect
ACC Region - Venezuela

Sun Microsystems, Inc.
http://www.sun.com
Phone: +58-212-905-3800
Fax: +58-212-905-3811
Email: roger.sol...@sun.com






Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-05 Thread Ellis, Mike
How about a generic ZFS options field in the JumpStart profile?
(Essentially an area where options can be specified that are all applied
to the boot pool, with provisions to deal with a broken-out /var.)

That should future-proof things to some extent, allowing for
compression=x, copies=x, blocksize=x, zfsPool-version=x,
checksum=sha256, future dedup, future crypto, and other such goodies.
(By just passing the keywords through and having ZFS deal with them on the
other end, the JumpStart code can remain quite static while the ZFS side
gradually introduces the new features.)
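
(Purely as a sketch of what such a profile entry might look like; the
zfs_option keyword below does not exist, it is the hypothetical generic field
being proposed, while pool/bootenv are the existing ZFS-root keywords:)

  # hypothetical JumpStart profile fragment; zfs_option is NOT a real keyword
  install_type  initial_install
  pool          rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
  bootenv       installbe bename osBE
  zfs_option    compression=on copies=2 checksum=sha256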

Just a thought,

 -- MikeE

PS: At one point the old JumpStart code was encumbered, and the
community wasn't able to assist. I haven't looked at the next-gen
jumpstart framework that was delivered as part of the OpenSolaris SPARC
preview. Can anyone provide any background/doc-link on that new
JumpStart framework?




-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Torrey McMahon
Sent: Tuesday, May 05, 2009 6:38 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Compression/copies on root pool RFE

Before I put one in ... anyone else seen one? Seems we support 
compression on the root pool but there is no way to enable it at install

time outside of a custom script you run before the installer. I'm 
thinking it should be a real install time option, have a jumpstart 
keyword, etc.  Same with copies=2

Thanks.


Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-05 Thread Robert Milkowski




Hello Roger,

Tuesday, May 5, 2009, 9:07:22 PM, you wrote:

Hello,

Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks?

For example:

A STK2540 storage array with this configuration:


Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.

I was thinking about using disks from Tray 1 as L2ARC for Tray 2 and put all of these disks in one (1) zfs storage pool.

This pool would be used mainly as astronomical images repository, shared via NFS over a Sun Fire X2200.

Is it worth to do?

I guess your files are astronomically big :)
But seriously - if your files are large and you expect to access them sequentially in large chunks, then the SATA drives can actually deliver the same performance as your 15k SAS disks, since you will be throughput-bound. If that is the case, then separating those 15ks out as L2ARC probably doesn't make sense.

If you expect lots of writes and small random reads, and a relatively large part of the working set will fit into the L2ARC, then it might make sense.

It also depends on how you configure your SATA drives - for example, you will usually get more benefit from L2ARC if your pool is raid-z[2] compared to raid-10, except in the case of a single-stream, large-I/O sequential reader.


In summary - it all depends on your workload...
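
(Once something is in place, the L2ARC counters in the ARC kstats give a rough
idea of whether the cache is earning its keep; a quick check on Solaris:)

  # L2ARC hit/miss/size counters; interpretation is workload-dependent
  kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses zfs:0:arcstats:l2_size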


--
Best regards,
Robert  Milkowski
   http://milek.blogspot.com





[zfs-discuss] snapshot management issues

2009-05-05 Thread Edward Pilatowicz
hey all,

so recently i wrote some zones code to manage zones on zfs datasets.
the code i wrote did things like rename snapshots and promote
filesystems.  while doing this work, i found a few zfs behaviours that,
if changed, could greatly simplify my work.

the primary issue i hit was that when renaming a snapshot, any clones
derived from that snapshot are unmounted/remounted.  promoting a
dataset (which results in snapshots being moved from one dataset to
another) doesn't result in any clones being unmounted/remounted.  this
made me wonder whether the mount cycle caused by renames is actually
necessary, or just an artifact of the current implementation.
removing this unmount/remount would greatly simplify my dataset
management code.  (the snapshot rename can also fail if any clones are
zoned or in use, so eliminating these mount operations would remove one
potential failure mode for zone administration operations.)
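
(for illustration, a minimal sketch of the sequence in question; dataset names
are made up:)

  # create a clone that hangs off a snapshot
  zfs snapshot tank/zones/z1@snap1
  zfs clone tank/zones/z1@snap1 tank/zones/z1-clone

  # renaming the snapshot currently unmounts/remounts the dependent clone
  zfs rename tank/zones/z1@snap1 tank/zones/z1@snap2

  # promoting the clone moves the snapshots across without any remount
  zfs promote tank/zones/z1-clone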

this problem was compounded by the fact that all the clone filesystems i
was dealing with were zoned.  the code i wrote runs in the global zone
and zfs prevents the global zone from mounting or unmounting zoned
filesystems.  (so my code additionally had to manipulate the zoned
attribute for cloned datasets.)  hence, if there's no way to eliminate
the need to unmount/remount filesystems when renaming snapshots, how
would people feel about adding an option to zfs/libzfs to be able to
override the restrictions imposed by the zoned attribute?
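
(i.e. today the global-zone code has to do roughly this dance around each
rename; names made up:)

  # temporarily un-zone the clone so the global zone may touch its mounts
  zfs set zoned=off tank/zones/z1-clone
  zfs rename tank/zones/z1@snap1 tank/zones/z1@snap2
  zfs set zoned=on tank/zones/z1-clone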

ed


Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-05 Thread Rob Logan


  use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks?

perhaps... depends on the workload, and on whether the working set
can live in the L2ARC.

 used mainly as astronomical images repository

hmm, perhaps two trays of 1T SATA drives, all
mirrors, rather than raidz sets of one tray.

i.e.: please don't discount how one arranges the vdevs
in a given configuration.
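
(e.g., very roughly, with placeholder device names:)

  # mirrored pairs across the 1T SATA drives instead of one wide raidz
  zpool create tank \
    mirror c2t0d0 c2t1d0 \
    mirror c2t2d0 c2t3d0 \
    mirror c2t4d0 c2t5d0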

Rob



Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-05 Thread Mike Gerdts
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote:
 PS: At one point the old JumpStart code was encumbered, and the
 community wasn't able to assist. I haven't looked at the next-gen
 jumpstart framework that was delivered as part of the OpenSolaris SPARC
 preview. Can anyone provide any background/doc-link on that new
 JumpStart framework?

I think you are looking for the Caiman project.  The replacement for
jumpstart is the Automated Installer (AI).

http://src.opensolaris.org/source/xref/caiman/AI_source/usr/src/
http://www.opensolaris.org/os/project/caiman/auto_install/

This is mostly code developed in the open.  I'm not aware of any
closed pieces that are specific to the new installer.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/