Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-19 Thread Roch - PAE


Jason J. W. Williams writes:
  Hi Jeremy,
  
  It would be nice if you could tell ZFS to turn off fsync() for ZIL
  writes on a per-zpool basis. That being said, I'm not sure there's a
  consensus on that...and I'm sure not smart enough to be a ZFS
  contributor. :-)
  
  The behavior is a reality we had to deal with and workaround, so I
  posted the instructions to hopefully help others in a similar boat.
  
  I think this is a valuable discussion point though...at least for us. :-)
  
  Best Regards,
  Jason
  

To Summarize:

Today, ZFS sends an ioctl to the storage that says "flush the
write cache", while what it really wants is "make sure data
is on stable storage".  The storage should then flush the cache
or not, depending on whether it is considered stable or not
(only the storage knows that).

Soon ZFS (more precisely SD) will be sending a 'qualified'
ioctl to clarify the requested behavior.

In parallel, storage vendors shall be implementing that
qualified ioctl.  ZFS customers of third-party storage
probably have more influence to get those vendors to support
the qualified behavior.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6462690

With  SD fixed and Storage  vendor support, there will be no
more need to tune anything.

-r
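
For reference, the interim host-side tuning that this fix makes unnecessary
looks roughly like the sketch below.  The zfs_nocacheflush tunable is assumed
to be available on the release in use (older builds may not have it, in which
case the array-side configuration Jason posted is the only option), and it is
only safe when every device backing every pool has non-volatile,
battery-backed cache:

  # Persistent: tell ZFS to stop issuing cache-flush requests entirely.
  # Add this line to /etc/system and reboot.
  set zfs:zfs_nocacheflush = 1

  # Live change on a running kernel (lost at the next reboot):
  echo 'zfs_nocacheflush/W0t1' | mdb -kw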



  On 12/15/06, Jeremy Teo [EMAIL PROTECTED] wrote:
The instructions will tell you how to configure the array to ignore
SCSI cache flushes/syncs on Engenio arrays. If anyone has additional
instructions for other arrays, please let me know and I'll be happy to
add them!
  
   Wouldn't it be more appropriate to allow the administrator to disable
   ZFS from issuing the write cache enable command during a commit?
   (assuming expensive high end battery backed cache etc etc)
   --
   Regards,
   Jeremy
  
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Roch - PAE

Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?

-r

Al Hopper writes:
  On Sun, 17 Dec 2006, Ricardo Correia wrote:
  
   On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment?  What will and will not work?
   
 From some of the information I have been gathering
it doesn't appear that ZFS was intended to operate
in a SAN environment.
  
   This might answer your question:
   http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
  
  The section entitled "Does ZFS work with SAN-attached devices?" does not
  make clear the (some would say) dire effects of not having pool
  redundancy.  I think that FAQ should clearly spell out the downside; i.e.,
  where ZFS will say (sorry, Charlie) the pool is corrupt.
  
  A FAQ should always emphasize the real-world downsides to poor decisions
  made by the reader.   Not delivering bad news does the reader a
  disservice IMHO.
  
  Regards,
  
  Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
 Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
  OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
   OpenSolaris Governing Board (OGB) Member - Feb 2006
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS and SE 3511

2006-12-19 Thread Mike Seda

Anton B. Rang wrote:

I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5. This
2 TB logical drive is partitioned into 10 x 200GB slices. I gave 4 of these slices to a 
Solaris 10 U2 machine and added each of them to a concat (non-raid) zpool as listed below:



This is certainly a supportable configuration.  However, it's not an optimal 
one.
  

What would be the optimal configuration that you recommend?

You think that you have a 'concat' structure, but it's actually striped/RAID-0, 
because ZFS implicitly stripes across all of its top-level structures (your 
slices, in this case). This means that ZFS will constantly be writing data to 
addresses around 0, 50 GB, 100 GB, and 150 GB of each disk (presuming the first 
four slices are those you used). This will keep the disk arms constantly in 
motion, which isn't good for performance.
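
As a sketch (device names are hypothetical), the layout being discussed is
what you get by listing several array partitions in one zpool create; ZFS
dynamically stripes across all top-level vdevs given this way, rather than
concatenating them:

  # Four 200 GB partitions carved from the same RAID-5 logical drive.
  # Listing them all creates a dynamic stripe (RAID-0) across four
  # top-level vdevs, not a concatenation.
  zpool create tank c2t0d0 c2t1d0 c2t2d0 c2t3d0

  # 'zpool add' likewise extends the stripe with another top-level vdev:
  zpool add tank c2t4d0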

  

do you think my zfs configuration caused the drive failure?



I doubt it. I haven't investigated which disks ship in the 3511, but I would presume they are 
enterprise-class ATA drives, which can handle this type of head motion. (Standard ATA disks can 
overheat under a load which is heavy in seeks.)  Then again, the 3511 is marketed as a near-line 
rather than on-line array ... that may be simply because the SATA drives don't perform as well as 
FC.

I do see this note in the 3511 documentation: "Note - Do not use a Sun StorEdge 3511 
SATA array to store single instances of data. It is more suitable for use in 
configurations where the array has a backup or archival role."

(I too am curious -- why do you consider yourself down? You've got a RAID 5, 
one disk is down, are you just worried about your current lack of redundancy? 
[I would be.] Will you be adding a hot spare?)
  
Yes, I am worried about the lack of redundancy. And, I have some new 
disks on order, at least one of which will be a hot spare.

Anton
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jonathan Edwards


On Dec 18, 2006, at 17:52, Richard Elling wrote:

In general, the closer to the user you can make policy decisions, the better
decisions you can make.  The fact that we've had 10 years of RAID arrays
acting like dumb block devices doesn't mean that will continue for the next
10 years :-)  In the interim, we will see more and more intelligence move
closer to the user.


I thought this is what the T10 OSD spec was set up to address.  We've already
got device manufacturers beginning to design and code to the spec.

---
.je

(ps .. actually it's closer to 20+ years of RAID and dumb block  
devices ..)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jonathan Edwards

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?

---
.je
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Roch - PAE

Jonathan Edwards writes:
  On Dec 19, 2006, at 07:17, Roch - PAE wrote:
  
  
   Shouldn't there be a big warning when configuring a pool
   with no redundancy and/or should that not require a -f flag ?
  
  why?  what if the redundancy is below the pool .. should we
  warn that ZFS isn't directly involved in redundancy decisions?
  

I think so while pointing to the associated downside of doing that.

-r

  ---
  .je

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

Nicolas Williams wrote:

On Mon, Dec 18, 2006 at 05:44:08PM -0500, Jeffrey Hutzelman wrote:
On Monday, December 18, 2006 11:32:37 AM -0600 Nicolas Williams 
[EMAIL PROTECTED] wrote:

  I'd say go for both, (a) and (b).  Of course, (b) may not be easy to
  implement.
Another option would be to warn the user and set a flag on the shared block 
which causes it to be bleached when the last reference goes away.  Of 
course, one still might want to give the user the option of forcing 
immediate bleaching of the shared data.


Sure, but if I want something bleached I probably want it bleached
_now_, not who knows when.


I think there are two related things here: given your comments and 
suggestions for a bleach(1) command and a VOP/FOP implementation, you are 
thinking about a completely different usage method than I am.


How do you /usr/bin/bleach the tmp file that your editor wrote to before 
it did the rename ?  You can't easily do that - if at all in some cases.


I'm looking for the systemic solution here not the end user controlled one.

For comparison, what you are suggesting is like doing crypto with 
encrypt(1): it works on pathnames, whereas what I'm suggesting is more 
like ZFS crypto: it works inside ZFS with deep, intimate knowledge of 
ZFS and requires zero change on behalf of the user or admin.


While I think having this in the VOP/FOP layer is interesting it isn't 
the problem I was trying to solve and to be completely honest I'm really 
not interested in solving this outside of ZFS - why make it easy for 
people to stay on UFS ;-)



But why set that per-file?  Why not per-dataset/volume?  "Bleach all
blocks when they are freed" automatically means bleaching blocks when
the last reference is gone (as a result of an unlink of the last file
that had some block, say).


I didn't have anything per file, but exactly what you said.  The policy 
was when files are removed, when data sets are removed, when pools are 
removed.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

Jeffrey Hutzelman wrote:



On Monday, December 18, 2006 05:51:14 PM -0600 Nicolas Williams 
[EMAIL PROTECTED] wrote:



On Mon, Dec 18, 2006 at 06:46:09PM -0500, Jeffrey Hutzelman wrote:



On Monday, December 18, 2006 05:16:28 PM -0600 Nicolas Williams
[EMAIL PROTECTED] wrote:

 Or an iovec-style specification.  But really, how often will one prefer
 this to truncate-and-bleach?  Also, the to-be-bleached octet ranges may
 not be meaningful in snapshots/clones.  Hmmm.  That convinces me:
 truncate-and-bleach or bleach-and-zero, but not bleach individual octet
 ranges.

Well, consider a file with some structure, like a berkeley db database.
The application may well want to bleach each record as it is deleted.


My point is those byte ranges might differ from one version of that
file to another.


That byte range contains the data the application is trying to bleach in 
any version of the file which contains the affected block(s).  Obviously 
if the file has been modified and the data moved to someplace else, then 
your bleach won't affect the version(s) of the file before the change.  
But then, there's only so much you can do.


I explicitly do NOT want the applications involved in this, the whole 
point of my proposal being the way it was is that it works equally for 
all applications and no application code needs to or can be changed to 
change this behaviour.  Just like doing crypto in the filesystem vs 
doing it at the application layer.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

Darren Reed wrote:

If/when ZFS supports this then it would be nice to also be able
to have Solaris bleach swap on ZFS when it shuts down or reboots.
Although it may be that this option needs to be put into how we
manage swap space and not specifically something for ZFS.

Doing this to swap space has been a kernel option on another very
widely spread operating system for at least 2 major OS releases...


Which ones?  I know that MacOS X and OpenBSD both support encrypted 
swap, which for swap IMO is a better way to solve this problem.


You can get that today with OpenSolaris by using the stuff in the loficc 
project.   You will also get encrypted swap when we have ZFS crypto and 
you swap on a ZVOL that is encrypted.


Note though that that isn't quite the same way as OpenBSD solves the 
encrypted swap problem, and I'm not familiar with the technical details 
of what Apple did in MacOS X.


Bleaching is a time consuming task, not something I'd want to do at 
system boot/halt.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

In case it wasn't clear I am NOT proposing a UI like this:

$ zfs bleach ~/Documents/company-finance.odp

Instead ~/Documents or ~ would be a ZFS file system with a policy set 
something like this:


# zfs set erase=file:zero

Or maybe more like this:

# zfs create -o erase=file -o erasemethod=zero homepool/darrenm

The goal is the same as the goal for things like compression in ZFS: no 
application change, it is free for the applications.


All of the same reasons for doing crypto outside of a command like 
encrypt(1) apply here too - especially the temp file and rename problems.


--
Darren J Moffat
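
A sketch of how the proposed policy might look in use.  Note that erase and
erasemethod are the hypothetical properties being proposed in this thread, not
shipping ZFS features, so these commands would not run on a current build:

  # Hypothetical properties from this proposal -- not implemented syntax.
  zfs create -o erase=file -o erasemethod=zero homepool/darrenm
  zfs get erase,erasemethod homepool/darrenm    # inherited by descendants

  # From then on, unlink(2), 'zfs destroy', and 'zpool destroy' would
  # bleach the freed blocks with no application involvement.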
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Darren J Moffat

Jonathan Edwards wrote:

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Yes because if ZFS doesn't know about it then ZFS can't use it to do 
corrections when the checksums (which always work) detect problems.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The size of a storage pool

2006-12-19 Thread Nathalie Poulet (IPSL)

Hello,
After an export and an import, the size of the pool remains 
unchanged. As there were no data on this partition, I destroyed and 
recreated the pool. The size was indeed taken into account.

The correct size is indicated by the command zpool list. The command df 
-k shows a size higher than the real size. The command zfs list shows 
a lower size. Why?


# df -k

data 3055288320 24 3055288244 1% /data

# zpool list

NAME SIZE USED AVAIL CAP HEALTH ALTROOT

data 2,89T 184K 2,89T 0% EN LIGNE -

# zfs list

NAME USED AVAIL REFER MOUNTPOINT

data 75,5K 2,85T 24,5K /data

Thanks.
Nathalie.



Hello Nathalie,

Monday, December 18, 2006, 2:14:29 PM, you wrote:

NPI I have a machine with ZFS connected to a SAN. The space of storage 
NPI increased on the SAN. The format command shows the increase in volume 
NPI well. But the size of the ZFS pool did not increase. What should be done 
NPI so that ZFS takes this increase in volume into account?


I haven't tested it but I believe right now you have to export the pool,
use format to put a new label on the disks so format will show the disks are
actually bigger, then re-import the pool (or change slice sizes if you put
zfs on slices).

There's an ongoing project to make it more automatic with ZFS.


 




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The size of a storage pool

2006-12-19 Thread Tomas Ögren
On 19 December, 2006 - Nathalie Poulet (IPSL) sent me these 1,4K bytes:

 Hello,
 After an export and an import, the size of the pool remains 
 unchanged. As there were no data on this partition, I destroyed and 
 recreated the pool. The size was indeed taken into account.
 
 The correct size is indicated by the command zpool list. The command df 
 -k shows a size higher than the real size. The command zfs list shows 
 a lower size. Why?
 
 # df -k
 
 data 3055288320 24 3055288244 1% /data

% echo 3055288320/1024/1024/1024 | bc -lq
2.845458984375

Seems about the same.

 # zpool list
 
 NAME SIZE USED AVAIL CAP HEALTH ALTROOT
 
 data 2,89T 184K 2,89T 0% EN LIGNE -
 
 # zfs list
 
 NAME USED AVAIL REFER MOUNTPOINT
 
 data 75,5K 2,85T 24,5K /data

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Jonathan Edwards


On Dec 19, 2006, at 08:59, Darren J Moffat wrote:


Darren Reed wrote:

If/when ZFS supports this then it would be nice to also be able
to have Solaris bleach swap on ZFS when it shuts down or reboots.
Although it may be that this option needs to be put into how we
manage swap space and not specifically something for ZFS.
Doing this to swap space has been a kernel option on another very
widely spread operating system for at least 2 major OS releases...


Which ones ?  I know that MacOS X and OpenBSD both support  
encrypted swap which for swap IMO is a better way to solve this  
problem.


You can get that today with OpenSolaris by using the stuff in the  
loficc project.   You will also get encrypted swap when we have ZFS  
crypto and you swap on a ZVOL that is encrypted.


Note though that that isn't quite the same way as OpenBSD solves  
the encrypted swap problem, and I'm not familiar with the technical  
details of what Apple did in MacOS X.


there's an encryption option in the dynamic_pager to write out  
encrypted paging files (/var/vm/swapfile*) .. it gets turned on with  
an environment variable that gets set at boot (what happens when you  
choose secure virtual memory.)  Before this was implemented there was  
a workaround using an encrypted dmg that held the swap files .. but  
that was an incomplete solution.


Bleaching is a time consuming task, not something I'd want to do at  
system boot/halt.


particularly if we choose to do a 35 pass Gutmann algorithm .. :)

---
.je
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Torrey McMahon

Darren J Moffat wrote:

Jonathan Edwards wrote:

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Yes because if ZFS doesn't know about it then ZFS can't use it to do 
corrections when the checksums (which always work) detect problems.





We do not have the intelligent end-to-end management to make these 
judgments. Trying to make one layer of the stack {stronger, smarter, 
faster, bigger,} while ignoring the others doesn't help. Trying to make 
educated guesses as to what the user intends doesn't help either.


The first bug we'll get when adding a "ZFS is not going to be able to 
fix data inconsistency problems" error message to every pool creation or 
similar operation is going to be "Need a flag to turn off the warning 
message..."



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Jonathan Edwards


On Dec 18, 2006, at 11:54, Darren J Moffat wrote:


[EMAIL PROTECTED] wrote:
Rather than bleaching which doesn't always remove all stains, why can't
we use a word like erasing (which is hitherto unused for filesystem use
in Solaris, AFAIK)


and this method doesn't remove all stains from the disk anyway it  
just reduces them so they can't be easily seen ;-)


and if you add the right amount of ammonia it should remove  
everything .. (ahh - fun with trichloramine)


---
.je
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Darren J Moffat

Torrey McMahon wrote:

Darren J Moffat wrote:

Jonathan Edwards wrote:

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Yes because if ZFS doesn't know about it then ZFS can't use it to do 
corrections when the checksums (which always work) detect problems.





We do not have the intelligent end-to-end management to make these 
judgments. Trying to make one layer of the stack {stronger, smarter, 
faster, bigger,} while ignoring the others doesn't help. Trying to make 
educated guesses as to what the user intends doesn't help either.


The first bug we'll get when adding a ZFS is not going to be able to 
fix data inconsistency problems error message to every pool creation or 
similar operation is going to be Need a flag to turn off the warning 
message...


said flag is 2>/dev/null ;-)


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Nicolas Williams
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
 In case it wasn't clear I am NOT proposing a UI like this:
 
 $ zfs bleach ~/Documents/company-finance.odp
 
 Instead ~/Documents or ~ would be a ZFS file system with a policy set 
 something like this:
 
 # zfs set erase=file:zero
 
 Or maybe more like this:
 
 # zfs create -o erase=file -o erasemethod=zero homepool/darrenm

I get it.  This should be lots easier than bleach(1).  Snapshots/clones
are mostly not an issue here.  When a block is truly freed, then it is
wiped.

Clones are an issue here only if they have different settings for this
property than the FS that spawned them (so you might want to disallow
re-setting of this property).

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

Nicolas Williams wrote:

On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:

In case it wasn't clear I am NOT proposing a UI like this:

$ zfs bleach ~/Documents/company-finance.odp

Instead ~/Documents or ~ would be a ZFS file system with a policy set 
something like this:


# zfs set erase=file:zero

Or maybe more like this:

# zfs create -o erase=file -o erasemethod=zero homepool/darrenm


I get it.  This should be lots easier than bleach(1).  Snapshots/clones
are mostly not an issue here.  When a block is truly freed, then it is
wiped.


Yep.


Clones are an issue here only if they have different settings for this
property than the FS that spawned them (so you might want to disallow
re-setting of this property).


I think you are saying it should have INHERIT set to YES and EDIT set 
to NO.  We don't currently have any properties like that but crypto will 
need this as well - for a very similar reason with clones.



--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Frank Hofmann

On Tue, 19 Dec 2006, Jonathan Edwards wrote:



On Dec 18, 2006, at 11:54, Darren J Moffat wrote:


[EMAIL PROTECTED] wrote:

Rather than bleaching which doesn't always remove all stains, why can't
we use a word like erasing (which is hitherto unused for filesystem use
in Solaris, AFAIK)


and this method doesn't remove all stains from the disk anyway it just 
reduces them so they can't be easily seen ;-)


and if you add the right amount of ammonia it should remove everything .. 
(ahh - fun with trichloramine)


Fluoric acid will dissolve the magnetic film on the platter as well as the 
platter itself. Always keep a PTFE bottle with the stuff in, just in case


;)

On the technical side, I don't think a new VOP will be needed. This could 
easily be done in VOP_SPACE together with a new per-fs property - bleach 
new block when it's allocated (aka VOP_SPACE directly, or in a backend 
also called e.g. on allocating writes / filling holes), bleach existing 
block when VOP_SPACE is used to stamp a hole into a file, aka a request 
is made to bleach the blocks of an existing file.
I.e. make the implementation behind ftruncate()/posix_fallocate() do the 
per-file bleaching if so desired. And that implementation is VOP_SPACE.


FrankH.




---
.je
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jonathan Edwards


On Dec 19, 2006, at 10:15, Torrey McMahon wrote:


Darren J Moffat wrote:

Jonathan Edwards wrote:

On Dec 19, 2006, at 07:17, Roch - PAE wrote:



Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?


why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Yes because if ZFS doesn't know about it then ZFS can't use it to  
do corrections when the checksums (which always work) detect  
problems.





We do not have the intelligent end-to-end management to make these  
judgments. Trying to make one layer of the stack {stronger,  
smarter, faster, bigger,} while ignoring the others doesn't help.  
Trying to make educated guesses as to what the user intends doesn't  
help either.


Hi! It looks like you're writing a block
 Would you like help?
- Get help writing the block
- Just write the block without help
- (Don't show me this tip again)

somehow I think we all know on some level that letting a system  
attempt to guess your intent will get pretty annoying after a while ..

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Nicolas Williams
On Tue, Dec 19, 2006 at 04:37:36PM +, Darren J Moffat wrote:
 I think you are saying it should have INHERIT set to YES and EDIT set 
 to NO.  We don't currently have any properties like that but crypto will 
 need this as well - for a very similar reason with clones.

What I mean is that if there's a block that's shared between two
filesystems then what do you do if the two filesystems have different
settings for this property?  IMO you shouldn't allow this to happen.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS and SE 3511

2006-12-19 Thread Anton Rang

On Dec 19, 2006, at 7:14 AM, Mike Seda wrote:


Anton B. Rang wrote:
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID  
5. This
2 TB logical drive is partitioned into 10 x 200GB slices. I gave  
4 of these slices to a Solaris 10 U2 machine and added each of  
them to a concat (non-raid) zpool as listed below:




This is certainly a supportable configuration.  However, it's not  
an optimal one.



What would be the optimal configuration that you recommend?


If you don't need ZFS redundancy, I would recommend taking a single  
slice for your ZFS file system (e.g. 6 x 200 GB for other file  
systems, and 1 x 800 GB for the ZFS pool).  There would still be  
contention between the various file systems, but at least ZFS would  
be working with a single contiguous block of space on the array.


Because of the implicit striping in ZFS, what you have right now is  
analogous to taking a single disk, partitioning it into several  
partitions, then striping across those partitions -- it works, you  
can use all of the space, but there's a rearrangement which means  
that logically contiguous blocks on disk are no longer physically  
contiguous, hurting performance substantially.
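
A minimal sketch of that recommendation, with hypothetical device names (one
larger array partition for the pool instead of four small ones from the same
logical drive):

  # One contiguous 800 GB partition from the array backs the ZFS pool;
  # the remaining partitions stay available for the other file systems.
  zpool create tank c2t0d0

  # If ZFS-level redundancy is wanted later, it would come from mirroring
  # two such partitions, ideally on different physical spindles:
  #   zpool create tank mirror c2t0d0 c3t0d0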


Yes, I am worried about the lack of redundancy. And, I have some  
new disks on order, at least one of which will be a hot spare.


Glad to hear it.

Anton


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

Frank Hofmann wrote:
On the technical side, I don't think a new VOP will be needed. This 
could easily be done in VOP_SPACE together with a new per-fs property - 
bleach new block when it's allocated (aka VOP_SPACE directly, or in a 
backend also called e.g. on allocating writes / filling holes), bleach 
existing block when VOP_SPACE is used to stamp a hole into a file, aka 
a request is made to bleach the blocks of an existing file.
I.e. make the implementation behind ftruncate()/posix_fallocate() do the 
per-file bleaching if so desired. And that implementation is VOP_SPACE.


That isn't solving the problem though, it solves a different problem.

The problem that I want to be solved is that as files/datasets/pools are 
deleted (not as they are allocated) they are bleached.  In the cases 
there would not be a call to posix_fallocate() or ftruncate(), instead 
an unlink(2) or a zfs destory or zpool destroy.  Also on hotsparing in a 
disk - if the old disk can still be written to in some way we should do 
our best to bleach it.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS in a SAN environment

2006-12-19 Thread Anton B. Rang
 I thought this is what the T10 OSD spec was set up to address.  We've already
 got device manufacturers beginning to design and code to the spec.

Precisely. The interface to block-based devices forces much of the knowledge 
that the file system and application have about access patterns to be thrown 
away before the device gets involved. The current OSD specification allows 
additional knowledge through ("Host X is accessing range Y of file Z"). I'm 
hopeful that future revisions will go even further, allowing knowledge such as 
"Process A on host X is accessing range Y of file Z", or even allowing 
processes/streams to be managed across multiple hosts.

OSD allows attributes as well; individual files could be tagged for a 
redundancy level, for instance.

(To make this relevant to this ZFS discussion, perhaps it's worth pointing out 
that ZFS would make an interesting starting point for certain types of OSD 
implementation.)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and SE 3511

2006-12-19 Thread Richard Elling

sidetracking below...

Matt Ingenthron wrote:

Mike Seda wrote:


Basically, is this a supported zfs configuration? 
Can't see why not, but support or not is something only Sun support can 
speak for, not this mailing list.


You say you lost access to the array though-- a full disk failure 
shouldn't cause this if you were using RAID-5 on the array.  Perhaps you 
mean you've had to take it out of production because it couldn't keep up 
with the expected workload?
You are gonna laugh, but do you think my zfs configuration caused the 
drive failure? 
You mention this is a new array.  As one Sun person (whose name I can't 
remember) mentioned to me, there's a high 'infant mortality' rate among 
semiconductors.  Components that are going to fail will either do so in 
the first 120 days or so, or will run for many years.


We don't use the term "infant mortality" because it elicits the wrong
emotion.  We use "early life failures" instead.

I'm no expert in the area though and I have no data to prove it, but it 
has felt somewhat true as I've seen new systems set up over the years.  
A quick search for semiconductor infant mortality turned up some 
interesting results.


We (Sun) do have the data and we track it rather closely.  If a product
shows a higher than expected early life failure rate then we investigate
the issue and take corrective action.

In general, semiconductor ELFs are discovered through the burn-in tests at
the factory.  However, there are some mechanical issues which can occur
during shipping [1].  And, of course, you can just be unlucky.  In any case,
I hope that the replacements arrive soon and work well.

[1] FOB origin is common -- a manufacturer's best friend?
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Richard Elling

Torrey McMahon wrote:
The first bug we'll get when adding a ZFS is not going to be able to 
fix data inconsistency problems error message to every pool creation or 
similar operation is going to be Need a flag to turn off the warning 
message...


Richard pines for ditto blocks for data...
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS and SE 3511

2006-12-19 Thread Jason J. W. Williams

I do see this note in the 3511 documentation: Note - Do not use a Sun StorEdge 3511 
SATA array to store single instances of data. It is more suitable for use in 
configurations where the array has a backup or archival role.


My understanding of this particular scare-tactic wording (it's also in
the SANnet II OEM version manual almost verbatim) is that it has
mostly to do with the relative unreliability of SATA firmware versus
SCSI/FC firmware. It's possible that the disks are lower-quality SATA
disks too, but that was not what was relayed to us when we looked at
buying the 3511 from Sun or the DotHill version (SANnet II).


Best Regards,
Jason
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jason J. W. Williams

 Shouldn't there be a big warning when configuring a pool
 with no redundancy and/or should that not require a -f flag ?

why?  what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?


Because if the host controller port goes flaky and starts introducing
checksum errors at the block level (a lady a few weeks ago reported
this), ZFS will kernel panic, and most users won't expect it.  Users
should be warned, it seems to me, of the real possibility of a kernel
panic if they don't implement redundancy at the zpool level. Just my 2
cents.

Best Regards,
Jason
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't find my pool

2006-12-19 Thread Rince

On 12/19/06, Brian Hechinger [EMAIL PROTECTED] wrote:


I'm trying to upgrade my desktop at work.  It used to have a 10G
partition with Windows on it and the rest of the disk was for
Solaris.  Windows pissed me off one too many times and got turned
into a 10G swap partition.

Because of the way this was all setup in the first place (poorly)
Solaris won't let me do an Upgrade of the current config.  Not a
huge deal, when I first set it up I really didn't give myself enough
space for the OS (aka non-ZFS) so I am going to install the new
Solaris (Build50) into that first partition of 10G.

I can't seem to access ZFS on the second partition however.  There
are several slices on that partition, ZFS being one of them.

How do I get ZFS to find it?

Thanks!!

-brian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



zpool import should give you a list of all the pools ZFS sees as being
mountable. zpool import [poolname] is also, conveniently, the command used
to mount the pool afterward. :)

If it doesn't show up there, I'll be surprised.

- Rich
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-19 Thread Jason J. W. Williams

Hi Roch,

That sounds like a most excellent resolution to me. :-) I believe
Engenio devices support SBC-2. It seems to me making intelligent
decisions for end-users is generally a good policy.

Best Regards,
Jason

On 12/19/06, Roch - PAE [EMAIL PROTECTED] wrote:



Jason J. W. Williams writes:
  Hi Jeremy,
 
  It would be nice if you could tell ZFS to turn off fsync() for ZIL
  writes on a per-zpool basis. That being said, I'm not sure there's a
  consensus on that...and I'm sure not smart enough to be a ZFS
  contributor. :-)
 
  The behavior is a reality we had to deal with and workaround, so I
  posted the instructions to hopefully help others in a similar boat.
 
  I think this is a valuable discussion point though...at least for us. :-)
 
  Best Regards,
  Jason
 

To Summarize:

Today, ZFS sends an ioctl to the storage that says "flush the
write cache", while what it really wants is "make sure data
is on stable storage".  The storage should then flush the cache
or not, depending on whether it is considered stable or not
(only the storage knows that).

Soon ZFS (more precisely SD) will be sending a 'qualified'
ioctl to clarify the requested behavior.

In parallel, storage vendors shall be implementing that
qualified ioctl.  ZFS customers of third-party storage
probably have more influence to get those vendors to support
the qualified behavior.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6462690

With  SD fixed and Storage  vendor support, there will be no
more need to tune anything.

-r



  On 12/15/06, Jeremy Teo [EMAIL PROTECTED] wrote:
The instructions will tell you how to configure the array to ignore
SCSI cache flushes/syncs on Engenio arrays. If anyone has additional
instructions for other arrays, please let me know and I'll be happy to
add them!
  
   Wouldn't it be more appropriate to allow the administrator to disable
   ZFS from issuing the write cache enable command during a commit?
   (assuming expensive high end battery backed cache etc etc)
   --
   Regards,
   Jeremy
  
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Bill Sommerfeld
On Mon, 2006-12-18 at 16:05 +, Darren J Moffat wrote:
 6) When modifying any file you want to bleach the old blocks in a way 
 very simlar to case 1 above.

I think this is the crux of the problem.  If you fail to solve it, you
can't meaningfully say that all blocks which once contained parts of a
file have been overwritten, and instead have to fall back on "bleach
all unallocated blocks in the pool".

And if you can solve this one, I think you get cases 1 and 2 for free.

I think the way to go here is to create a file, dataset, and/or pool
property which turns on "bleach on free"; any blocks freed after this
property is set will be appropriately bleached.

Other issues:
 - in some threat models, overwrite by zero is sufficient; in others,
you need multiple passes of overwrite with specific data patterns.

 - If you're going to immediately reuse a block, do you need to bleach
before reallocation, or is mere overwrite by different allocated data
considered sufficient?

There also may be a reason to do this when confidentiality isn't
required: as a sparse provisioning hack..

If you were to build a zfs pool out of compressed zvols backed by
another pool, then it would be very convenient if you could run in a
mode where freed blocks were overwritten by zeros when they were freed,
because this would permit the underlying compressed zvol to free *its*
blocks.

- Bill
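
A rough sketch of that construction, with made-up pool and volume names: an
inner pool layered on a compressed zvol, so that freed blocks overwritten with
zeros would compress down to almost nothing in the backing pool:

  # Backing volume; compression means long runs of zeros take almost no space.
  zfs create -V 100g bigpool/backing
  zfs set compression=on bigpool/backing

  # Inner pool built on the zvol's block device.
  zpool create innerpool /dev/zvol/dsk/bigpool/backing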


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Robert Milkowski
Hello Jason,

Tuesday, December 19, 2006, 8:54:09 PM, you wrote:

  Shouldn't there be a big warning when configuring a pool
  with no redundancy and/or should that not require a -f flag ?

 why?  what if the redundancy is below the pool .. should we
 warn that ZFS isn't directly involved in redundancy decisions?

JJWW Because if the host controller port goes flaky and starts introducing
JJWW checksum errors at the block level (a lady a few weeks ago reported
JJWW this) ZFS will kernel panic, and most users won't expect it.  Users
JJWW should be warned it seems to me to the real possibility of a kernel
JJWW panic if they don't implement redundancy at the zpool level. Just my 2
JJWW cents.

I don't agree - do not assume the sysadmin is a complete idiot.
Sure, let's create GUIs and other 'intelligent' creators which are for
very beginner users with no understanding at all.

Maybe we need something like vxassist (zfsassist?)?



-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't find my pool

2006-12-19 Thread Brian Hechinger
On Tue, Dec 19, 2006 at 02:55:59PM -0500, Rince wrote:
 
 zpool import should give you a list of all the pools ZFS sees as being
 mountable. zpool import [poolname] is also, conveniently, the command used
 to mount the pool afterward. :)

Which is what I expected to happen, however.

 If it doesn't show up there, I'll be surprised.

Be prepared to be surprised.  ;)

zpool import doesn't see the zpool.  To make matters worse, I don't seem to be
able to get into my old install.  ;)

-brian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't find my pool

2006-12-19 Thread Brian Hechinger
On Tue, Dec 19, 2006 at 02:55:59PM -0500, Rince wrote:
 
 If it doesn't show up there, I'll be surprised.

I take that back, I just managed to restore my ability to boot the old
instance.

I will be making backups and starting clean, this old partitioning has
screwed me up for the last time.

Thanks!!!

-brian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Nicolas Williams
On Tue, Dec 19, 2006 at 03:09:03PM -0500, Jeffrey Hutzelman wrote:
 
 
 On Tuesday, December 19, 2006 01:54:56 PM + Darren J Moffat 
 [EMAIL PROTECTED] wrote:
 
 While I think having this in the VOP/FOP layer is interesting it isn't
 the problem I was trying to solve and to be completely honest I'm really
 not interested in solving this outside of ZFS - why make it easy for
 people to stay on UFS ;-)
 
 Because as great as ZFS is, someday someone is going to run into a problem 
 that it doesn't solve.  Having the right abstraction to begin with will 
 make that day easier when it comes.

I understand what Darren was proposing now.  He's talking about wiping
blocks as they are freed.

I initially thought he meant something like a guarantee on file deletion
that the file's data is gone -- but snapshots and clones are in conflict
with that, but not with wiping blocks as they are freed.

Now, if we had a bleach(1) operation, then we'd need a bleach(2) and a
VOP_BLEACH and fop_bleach.  But that's not what Darren is proposing.

 I didn't have anything per file, but exactly what you said.  The policy
 was when files are removed, when data sets are removed, when pools are
 removed.
 
 Well, that's great for situations where things actually get removed.  It's 
 not so great for things that get rewritten rather than removed, and it 
 seems nearly useless for vdevs.  I think there's some benefit to making the 
 functionality directly available to user-mode, but more importantly, 
 there's a definite advantage to a system in which the user knows that a 
 file was bleached when they removed it, and not decades later when someone 
 gets around to removing a stray snapshot.  That difference can have serious 
 legal and/or intelligence implications.

Yes, I think that a bleach operation that forcefully removes a file's
contents even in all snapshots and clones, could be useful.  But I'm not
sure that we can get it.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[4]: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Robert Milkowski
Hello Jason,

Tuesday, December 19, 2006, 11:23:56 PM, you wrote:

JJWW Hi Robert,

JJWW I don't think its about assuming the admin is an idiot. It happened to
JJWW me in development and I didn't expect it...I hope I'm not an idiot.
JJWW :-)

JJWW Just observing the list, a fair amount of people don't expect it. The
JJWW likelihood you'll miss this one little bit of very important
JJWW information in the manual or man page is pretty high. So it would be
JJWW nice if an informational message appeared saying something like:

JJWW INFORMATION: If a member of this striped zpool becomes unavailable or
JJWW develops corruption, Solaris will kernel panic and reboot to protect
JJWW your data.

JJWW I definitely wouldn't require any sort of acknowledgment of this
JJWW message, such as requiring a -f flag to continue.

First sorry for my wording - no offense to anyone was meant.

I don't know; it's like changing every tool in the system so that:

  # rm file
  INFORMATION: by removing file you won't be able to read it again

  # mv fileA fileB
  INFORMATION: by moving fileA to fileB you won't be able 

  # reboot
  INFORMATION: by rebooting server it won't be up for some time


I don't know if such behavior is desired.
If someone doesn't understand basic RAID concepts then perhaps some
assistant utilities (GUI or CLI) are more appropriate for them, like
Veritas did. But putting warning messages here and there to inform the
user that he probably doesn't know what he is doing isn't a good
option.

Perhaps zpool status should explicitly show stripe groups with word
stripe, like:

   home
 stripe
   c0t0d0
   c0t1d0

So it will be more clear to people what they actually configured.
I would really hate a system informing me on every command that I
possibly don't know what I'm doing.


Maybe just a wrapper:

zfsassist redundant space-optimized disk0 disk1 disk2
zfsassist redundant speed-optimized disk0 disk1 disk2
zfsassist non-redundant disk0 disk1 disk2

you get the idea.



-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2006-12-19 Thread Eric Boutilier

For background on what this is, see:

http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200

=
zfs-discuss 12/01 - 12/15
=

Size of all threads during period:

Thread size Topic
--- -
 33   Production ZFS Server Death (06/06)
 26   ZFS related kernel panic
 23   ZFS Storage Pool advice
 23   A Plea for Help: Thumper/ZFS/NFS/B43
 18   Uber block corruption?
 16   weird thing with zfs
 16   ZFS Usage in Warehousing (lengthy intro)
 16   Netapp to Solaris/ZFS issues
 12   ZFS failover without multipathing
  9   need Clarification on ZFS
  9   ZFS on a damaged disk
  9   System pause peculiarity with mysql on zfs
  9   Managed to corrupt my pool
  9   Kickstart hot spare attachment
  8   replacing a drive in a raidz vdev
  8   Sol10u3 -- is du bug fixed?
  8   Disappearing directories
  8   Corrupted pool
  7   zfs exported a live filesystem
  7   ZFS compression / ARC interaction
  7   Need Clarification on ZFS quota property.
  7   How to get new ZFS Solaris 10 U3 features going from Solaris 10 U2
  7   Can't destroy corrupted pool
  6   zpool import takes to long with large numbers of file systems
  6   SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS
  5   ZFS on multi-volume
  5   ZFS and write caching (SATA)
  5   Vanity ZVOL paths?
  5   Monitoring ZFS
  5   Instructions for ignoring ZFS write cache flushing on intelligent 
arrays
  4   ZFS works in waves
  4   ZFS in a SAN environment
  4   ZFS Usage in Warehousing (no more lengthy intro)
  4   ZFS Corruption
  4   Shared ZFS pools
  4   Creating zfs filesystem on a partition with ufs - Newbie
  3   zpool mirror
  3   raidz DEGRADED state
  3   hardware planning for storage server
  3   ZFS with Samba Shadow Copy
  3   ZFS questions
  3   ZFS behavior under heavy load (I/O that is)
  3   ZFS Usage in Warehousing (lengthy intro, now slightly OT)
  3   Snapshots impact on performance
  3   Need Help on ZFS.
  3   Limitations of ZFS
  3   How to do DIRECT IO on ZFS ?
  3   How does zfs mount at boot? How to let the system not to mount 
zfs?
  2   need help to install flash player for solaris 10.
  2   doubt on solaris 10
  2   ZFS gui to create RAID 1+0 pools
  2   ZFS bootability target
  2   ZFS and ISCSI
  2   Some ZFS questions
  2   Solaris 11/06 + iscsi integration
  2   Report
  2   Performance problems during 'destroy' (and bizzare Zone problem 
as well)
  2   It ready
  2   Doubt on solaris 10 installation ..
  1   ztest - ZFS developer maintained test suite
  1   weird problem
  1   it's me Lester
  1   fwd: Lacey
  1   disappearing mount - expected behavior?
  1   ZFS problems
  1   ZFS NFS/Samba server advice
  1   Tired of VxVM - too many issues and too  - Maybe ZFS as 
alternative
  1   Rosa FINANCIAL REPORT
  1   Ramona check this.
  1   Mayer advice
  1   Maxwell
  1   Lockhart advice
  1   Kelly advice
  1   Jermaine
  1   How to get new ZFS Solaris 10 U3 features going
  1   Greetings Rick
  1   Greetings Reid
  1   Greetings Luz
  1   Good Morning Guadalupe
  1   Gee advice
  1   Elliot FINANCIAL REPORT
  1   Dale advice
  1   Cunningham advice
  1   Basil check this.


Posting activity by person for period:

# of posts  By
--   --
 31   anton.rang at sun.com (anton b. rang)
 27   rmilkowski at task.gda.pl (robert milkowski)
 24   richard.elling at sun.com (richard elling)
 16   jasonjwwilliams at gmail.com (jason j. w. williams)
 13   jfh at cise.ufl.edu (jim hranicky)
 12   eric.kustarz at sun.com (eric kustarz)
 12   ddunham at taos.com (darren dunham)
 10   chad at shire.net (chad leigh -- shire.net llc)
  9   al at logical-approach.com (al hopper)
  8   roch.bourbonnais at sun.com (roch - pae)
  8   d_mastan at yahoo.com (dudekula mastan)
  7   rcorreia at wizy.org (ricardo correia)
  7   krzys at perfekt.net (krzys)
  7   darren.moffat at sun.com (darren j moffat)
  7   casper.dik at sun.com (casper dik)
  6   wheakory at isu.edu (kory wheatley)
  6   

Re: [security-discuss] Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Bill Sommerfeld
On Tue, 2006-12-19 at 16:19 -0800, Matthew Ahrens wrote:
 Darren J Moffat wrote:
  I believe that ZFS should provide a method of bleaching a disk or part 
  of it that works without crypto having ever been involved.
 
 I see two use cases here:

I agree with your two, but I think I see a third use case in Darren's
example: bleaching disks as they are removed from a pool.

We may need a second dimension controlling *how* to bleach..

- Bill




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2006-12-19 Thread Al Hopper

Thanks a lot Eric.
But weren't you supposed to be on vacation!?

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 OpenSolaris Governing Board (OGB) Member - Feb 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't find my pool

2006-12-19 Thread Rince

On 12/19/06, Brian Hechinger [EMAIL PROTECTED] wrote:


On Tue, Dec 19, 2006 at 02:55:59PM -0500, Rince wrote:

 If it doesn't show up there, I'll be surprised.

I take that back, I just managed to restore my ability to boot the old
instance.

I will be making backups and starting clean, this old partitioning has
screwed me up for the last time.

Thanks!!!

-brian



What exactly did it say? Did it say "there are some pools that couldn't be
imported, use zpool import -f to see them", or just "no pools available"?

If not, then I suspect that Solaris install didn't see the relevant disk
slices. devfsadm -c disk should populate /dev/dsk or others as
appropriate.

- Rich
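
For anyone following along, the usual sequence is roughly as follows (pool
name hypothetical):

  devfsadm -c disk            # rebuild /dev/dsk entries for the disk slices
  zpool import                # list pools visible on those devices
  zpool import -d /dev/dsk    # point the search at a specific device directory
  zpool import -f tank        # force-import a pool last used by another install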
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re[2]: ZFS in a SAN environment

2006-12-19 Thread Anton B. Rang
 INFORMATION: If a member of this striped zpool becomes unavailable or
 develops corruption, Solaris will kernel panic and reboot to protect your 
 data.

OK, I'm puzzled.

Am I the only one on this list who believes that a kernel panic, instead of 
EIO, represents a bug?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re[2]: ZFS in a SAN environment

2006-12-19 Thread Torrey McMahon

Anton B. Rang wrote:

INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect your data.



OK, I'm puzzled.

Am I the only one on this list who believes that a kernel panic, instead of 
EIO, represents a bug?
 
  



Nope.  I'm with you.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [security-discuss] Re: [zfs-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Matthew Ahrens

Bill Sommerfeld wrote:

On Tue, 2006-12-19 at 16:19 -0800, Matthew Ahrens wrote:

Darren J Moffat wrote:
I believe that ZFS should provide a method of bleaching a disk or part 
of it that works without crypto having ever been involved.

I see two use cases here:


I agree with your two, but I think I see a third use case in Darren's
example: bleaching disks as they are removed from a pool.


That sounds plausible too.  (And you could implement it as 'zfs destroy 
-r pool; zpool bleach pool'.)



We may need a second dimension controlling *how* to bleach..


You mean whether we do a single overwrite with zeros, multiple overwrites 
with some crazy government-mandated patterns, etc, right?  That's what I 
meant by "the value of the property can specify what type of bleach to 
use" (perhaps taking the metaphor a bit too far); for example, 'zfs set 
bleach=how fs'.  Like other properties, we would provide bleach=on, 
which would choose a reasonable default.  We'd need something similar 
with 'zpool bleach' (e.g. 'zpool bleach [-o how] pool').


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The size of a storage pool

2006-12-19 Thread Matthew Ahrens

Nathalie Poulet (IPSL) wrote:

Hello,
After an export and an import, the size of the pool remains 
unchanged. As there were no data on this partition, I destroyed and 
recreated the pool. The size was indeed taken into account.


The correct size is indicated by the command zpool list. The command df 
-k shows a size higher than the real size. The command zfs list shows 
a lower size. Why?


As Tomas pointed out, zfs list and df -k show the same size.  zpool 
list shows slightly more, because it does its accounting differently, 
taking into account only actual blocks allocated, whereas the others 
show usable space, taking into account the small amount of space we 
reserve for allocation efficiency (as well as quotas or reservations, if 
you have them).


The fact that 'zpool list' shows the raw values is bug 6308817, 
"discrepancy between zfs and zpool space accounting". 



--matt
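
Using the numbers from earlier in the thread, the held-back amount is small; a
quick check in the style of Tomas's bc example:

  # zpool list reported 2.89T raw, zfs list reported 2.85T usable.
  echo '(2.89-2.85)/2.89*100' | bc -l    # roughly 1.4% of the raw size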
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re[2]: ZFS in a SAN environment

2006-12-19 Thread Dennis Clarke

 Anton B. Rang wrote:
 INFORMATION: If a member of this striped zpool becomes unavailable or
 develops corruption, Solaris will kernel panic and reboot to protect your
 data.


 OK, I'm puzzled.

 Am I the only one on this list who believes that a kernel panic, instead
 of EIO, represents a bug?


 Nope.  I'm with you.

no no .. it's a feature.  :-P

If it walks like a duck and quacks like a duck then it's a duck.

a kernel panic that brings down a system is a bug.  Plain and simple.

Dennis

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss