[zfs-discuss] Why does disabling loading of the zfs module at boot cause problems

2006-12-26 Thread David Shwatrz
Hello,

I tried to disable loading of the zfs module by adding
"exclude zfs" to /etc/system.

I rebooted and got into maintenance mode with many services disabled (as
svcs -xv shows).

I don't have any zfs partitions on this machine.
So my question is: what is the zfs module used for during the boot
of Solaris? Why can't it be disabled?
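(For reference, a sketch of the /etc/system syntax -- note the exclude
directive takes a colon -- plus the checks run after reboot:)

    # line added to /etc/system to keep a module from loading
    exclude: zfs

    # after reboot: did the module load anyway, and which services broke?
    modinfo | grep zfs
    svcs -xv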

Regards,
David
 
 


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-26 Thread Victor Latushkin



Darren J Moffat wrote:

> Pawel Jakub Dawidek wrote:
>
>> I like the idea, I really do, but it will be so expensive because of
>> ZFS' COW model. Not only will file removal or truncation trigger
>> bleaching, but every single file system modification... Heh, well, if
>> privacy of your data is important enough, you probably don't care too
>> much about performance.
>
> I'm not sure it will be that slow; the bleaching will be done in a
> separate (new) transaction group in most (probably all) cases anyway, so
> it shouldn't really impact your write performance unless you are very
> I/O bound and already running near the limit.  However, this is
> speculation until someone tries to implement this!
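(To make the mechanics concrete: "bleaching" a freed block just means
overwriting its contents on disk, conceptually something like the
following -- the device and offset are made up, and this is destructive:)

    # overwrite one hypothetical 128K block with random data
    # (destructive -- never point this at a disk holding data you need)
    dd if=/dev/urandom of=/dev/rdsk/c0t0d0s0 bs=128k oseek=1000 count=1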




What happens if a fatal failure occurs after the txg which frees the
blocks has been written, but before the txg doing the bleaching has
started or completed?



>> I for one would prefer encryption, which may turn out to be
>> much faster than bleaching and also more secure.
>
> At least NIST, under (I believe) the guidance of the NSA, does not
> consider encryption and key destruction alone to be sufficient in all
> cases.  Which is why I'm proposing this as complementary.

True, dropping the keys leaves lots of encrypted material for a determined
cryptanalyst to analyze, so it should be bleached in some good way.


Victor


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-26 Thread Bill Sommerfeld
On Tue, 2006-12-26 at 14:01 +0300, Victor Latushkin wrote:

> What happens if a fatal failure occurs after the txg which frees the
> blocks has been written, but before the txg doing the bleaching has
> started or completed?

clearly you'd need to store the unbleached list persistently in the
pool.

transactions which freed blocks (by punching holes in the allocation
space map) would instead, or additionally, move them to the unbleached
list; a separate bleaching task queue would pick blocks off the
unbleached list and bleach them; only once bleaching was complete would
they be removed from the unbleached list.

In the face of a crash, some blocks might get bleached twice.
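(A toy /bin/sh sketch of that lifecycle -- a flat file stands in for the
persistent on-disk list, and an echo stands in for the real overwrite:)

    #!/bin/sh
    LIST=/var/tmp/unbleached.list

    free_blocks() {            # the freeing txg records blocks first
        for b in "$@"; do echo "$b" >> "$LIST"; done
    }

    bleach_worker() {          # a separate task queue drains the list
        while [ -s "$LIST" ]; do
            b=`head -1 "$LIST"`
            echo "bleaching block $b"     # real code would overwrite it
            # remove the entry only AFTER bleaching, so a crash here just
            # means the block gets bleached again on restart
            sed 1d "$LIST" > "$LIST.new" && mv "$LIST.new" "$LIST"
        done
    }

    free_blocks 100 101 102
    bleach_worker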

- Bill







Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-26 Thread Torrey McMahon

Bill Sommerfeld wrote:

> On Tue, 2006-12-26 at 14:01 +0300, Victor Latushkin wrote:
>
>> What happens if a fatal failure occurs after the txg which frees the
>> blocks has been written, but before the txg doing the bleaching has
>> started or completed?
>
> clearly you'd need to store the unbleached list persistently in the
> pool.


Which could then be easily referenced to find all the blocks that were 
recently deleted but not yet bleached? Is my paranoia running a bit too 
high?




Re: Re[2]: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-26 Thread Jason J. W. Williams

Hi Robert,

MPxIO had correctly moved the paths. More than one path to controller
A was OK, and one path to controller A for each LUN was active when
controller B was rebooted.  I have a hunch that the array was at
fault, because the same event also rebooted a Windows server with LUNs
only on Controller A. In the case of the Windows server, Engenio's RDAC
was handling multipathing. Overall, not a big deal; I just wouldn't
trust the array to do a hitless commanded controller failover or
firmware upgrade.

-J

On 12/22/06, Robert Milkowski [EMAIL PROTECTED] wrote:

Hello Jason,

Friday, December 22, 2006, 5:55:38 PM, you wrote:

JJWW Just for what it's worth: when we rebooted a controller in our array
JJWW (we pre-moved all the LUNs to the other controller), ZFS kernel
JJWW panicked despite using MPxIO. We verified that all the LUNs were on
JJWW the correct controller when this occurred. It's not clear why ZFS
JJWW thought it lost a LUN, but it did. We have done cable pulling using
JJWW ZFS/MPxIO before and that works very well. It may well be
JJWW array-related in our case, but I'd hate for anyone to have a false
JJWW sense of security.

Did you first check (with format, for example) if the LUNs were really
accessible? If MPxIO worked OK and at least one path is OK, then ZFS
won't panic.
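For example (assuming your build ships mpathadm):

    echo | format        # do all the expected LUNs still show up?
    mpathadm list lu     # path counts and states for each LUN under MPxIO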

--
Best regards,
 Robert                       mailto:[EMAIL PROTECTED]
                              http://milek.blogspot.com





[zfs-discuss] Re: Advice Wanted - sharing across multiple non global zones

2006-12-26 Thread Wes Williams
> Hi..
> After searching high & low, I cannot find the answer for what I want
> to do (or at least understand how to do it). I am hopeful somebody can
> point me in the right direction.
> I have (2) non-global zones (samba & www). I want to be able to have
> all user home dirs served from zone samba AND be visible under zone
> www as the users' public_html dirs. I have looked at delegating a
> dataset to samba and creating a new fs for each user, but then I
> cannot share that with www. I also tried creating the fs under the
> global zone and mounting that via lofs, but that did not seem to carry
> over each underlying fs and lost the quota capability. I cannot share
> via NFS since non-global zones cannot mount from the same server.
>
> How can I achieve what I want to do?
>
> The requirements are:
>
> User quotas (needs a file system for each user)
> Share file systems across multiple non-global zones (rw)
>
> I have close to 3000 users, so it must be a manageable approach and
> hopefully allow me to use the root preexec of samba to auto-create
> user dirs.

Have a peek at this page:
http://www.sun.com/software/solaris/howtoguides/s10securityhowto.jsp
I believe it may give you some insights into your final objective.
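As a rough sketch of the per-user-filesystem half (the pool, zone, and
user names here are made up, and you'd need one lofs mount per user fs
given the traversal issue you hit):

    # one filesystem per user, with a quota
    zfs create -o quota=200m tank/home/jdoe

    # loopback-mount it into a zone, read-write (repeat per zone and per
    # user; takes effect on the next zone boot)
    zonecfg -z www "add fs; set dir=/export/home/jdoe; \
    set special=/tank/home/jdoe; set type=lofs; add options rw; end; commit"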
 
 


[zfs-discuss] Saving scrub results before scrub completes

2006-12-26 Thread Siegfried Nikolaivich
Hello All,

I am wondering if there is a way to save the scrub results right before the 
scrub is complete.

After upgrading to Solaris 10U3 I still have ZFS panicking right as the scrub
completes.  The scrub results seem to be cleared when the system boots back
up, so I never get a chance to see them.

Does anyone know of a simple way?
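(The crudest workaround I can think of is to poll the status into a file
so the last report survives the panic -- assuming a pool named "tank":)

    # log scrub progress every 30 seconds; sync so the tail survives a panic
    while true; do
        date >> /var/tmp/scrub-status.log
        zpool status -v tank >> /var/tmp/scrub-status.log
        sync
        sleep 30
    done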
 
 