Re: [zfs-discuss] zpool remove problem

2008-01-15 Thread Mark J Musante
On Mon, 14 Jan 2008, Wyllys Ingersoll wrote:

 That doesn't work either.

The zpool replace command didn't work?  You wouldn't happen to have a copy 
of the errors you received, would you?  I'd like to see that.


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving zfs to an iscsci equallogic LUN

2008-01-15 Thread Kory Wheatley
What would be the commands for the three-way mirror, or an example of what
you're describing? I thought the 200 GB LUN would have to be the same size to
attach to the existing mirror, and that you would have to attach two LUN disks
rather than one. Once it attaches, it automatically resilvers (syncs) the disk;
then, if I wanted to, could I remove the two 73 GB disks, or keep them in the
pool and expand the pool later?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving zfs to an iscsci equallogic LUN

2008-01-15 Thread Ellis, Mike
Use zpool replace to swap one side of the mirror with the iscsi lun.
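
For example (a minimal sketch only - the pool and device names below are
placeholders, not taken from your setup):

# zpool replace tank c1t0d0 <iscsi-lun-device>

ZFS resilvers that side of the mirror onto the LUN and then drops the old
disk; zpool status shows the resilver progress.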

-- mikee



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving zfs to an iscsci equallogic LUN

2008-01-15 Thread Robert Milkowski
Hello Kory,

Tuesday, January 15, 2008, 1:46:40 PM, you wrote:

KW What would be the commands for the three-way mirror, or an example
KW of what you're describing? I thought the 200 GB LUN would have to be
KW the same size to attach to the existing mirror, and that you would
KW have to attach two LUN disks rather than one. Once it attaches, it
KW automatically resilvers (syncs) the disk; then, if I wanted to,
KW could I remove the two 73 GB disks, or keep them in the pool and
KW expand the pool later?

No, a disk you attach to a mirror doesn't have to be the same size - it
can be bigger. However, you won't be able to use the extra space on the
larger disk as long as it forms an N-way mirror with smaller devices.

And yes, if you add (attach) another disk to a mirror it will
automatically resilver, and you can keep the previous two disks - you
will get a 3-way mirror (in general you can create an N-way mirror).
Once you're happy that the new disk is working properly, you just
remove (detach) the two old disks and your pool automatically grows.

Keep in mind that the reverse is not possible (yet).
The example below shows your case.


# mkfile 512m disk1
# mkfile 512m disk2
# mkfile 1024m disk3

# zpool create test mirror /root/disk1 /root/disk2
# zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
test ONLINE   0 0 0
  mirror ONLINE   0 0 0
/root/disk1  ONLINE   0 0 0
/root/disk2  ONLINE   0 0 0


# cp -rp /lib/ /test/
# zpool list
NAME   SIZE   USED    AVAIL   CAP   HEALTH   ALTROOT
test   504M   83.2M   421M    16%   ONLINE   -
#

# zpool attach test /root/disk2 /root/disk3

# zpool status
  pool: test
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 69.59% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
test ONLINE   0 0 0
  mirror ONLINE   0 0 0
/root/disk1  ONLINE   0 0 0
/root/disk2  ONLINE   0 0 0
/root/disk3  ONLINE   0 0 0

errors: No known data errors
#

After waiting for the resilver to complete:

# zpool status
  pool: test
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Jan 15 14:41:13 2008
config:

NAME STATE READ WRITE CKSUM
test ONLINE   0 0 0
  mirror ONLINE   0 0 0
/root/disk1  ONLINE   0 0 0
/root/disk2  ONLINE   0 0 0
/root/disk3  ONLINE   0 0 0

errors: No known data errors
#
# zpool list
NAME   SIZE   USED    AVAIL   CAP   HEALTH   ALTROOT
test   504M   83.3M   421M    16%   ONLINE   -
#

# zpool detach test /root/disk1
# zpool detach test /root/disk2

# zpool status
  pool: test
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Jan 15 14:41:13 2008
config:

NAME   STATE READ WRITE CKSUM
test   ONLINE   0 0 0
  /root/disk3  ONLINE   0 0 0

errors: No known data errors
#

# zpool list
NAME   SIZE    USED    AVAIL   CAP   HEALTH   ALTROOT
test   1016M   83.2M   933M    8%    ONLINE   -
#


So we've migrated data from a 2-way mirror to just one disk, live,
without unmounting file systems, etc. If your third disk is itself
protected, then by following the above procedure you have a protected
configuration the whole time (although at the end you rely on disk3's
built-in redundancy).





-- 
Best regards,
 Robert Milkowski   mailto:[EMAIL PROTECTED]
                    http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS versus VxFS as file system inside Netbackup 6.0 DSSU

2008-01-15 Thread Sengor
Veritas products tend to work best with... well... other Veritas products.

On 1/11/08, Patrick Herman [EMAIL PROTECTED] wrote:

 Hello experts,


 We have a large implementation of Symantec NetBackup 6.0 with disk staging.
 Today, the customer is using VxFS as the file system inside the NetBackup 6.0
 DSSU (disk staging) area.

 The customer would like to know whether ZFS or VxFS is the better file system
 to use inside NetBackup disk staging in order to get the best performance
 possible.

 Could you provide some information regarding this topic?


 Thanks in advance for your help

 Regards

 Patrick



-- 
_/ sengork.blogspot.com /
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS versus VxFS as file system inside Netbackup 6.0 DSSU

2008-01-15 Thread Selim Daoud
With ZFS you can compress data on disk - that is a great advantage
when doing backup to disk.
Also, for DSSUs you need to multiply the number of file systems (one fs
per storage unit); the advantage of ZFS is that you don't need to fix
the size of each fs up front (the space is shared among all of them).
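
As a rough sketch (the pool, device and fs names below are made up for
illustration), one file system per storage unit with compression enabled
could look like:

# zpool create backup mirror c2t0d0 c3t0d0
# zfs set compression=on backup
# zfs create backup/dssu1
# zfs create backup/dssu2

All the dssuN file systems draw from the same pool, so there is no per-fs
sizing to do, and compression is inherited from the parent dataset.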
s-

On Jan 10, 2008 2:12 PM, Patrick Herman [EMAIL PROTECTED] wrote:

 Hello experts,


 We have a large implementation of Symantec NetBackup 6.0 with disk staging.
 Today, the customer is using VxFS as the file system inside the NetBackup 6.0
 DSSU (disk staging) area.

 The customer would like to know whether ZFS or VxFS is the better file system
 to use inside NetBackup disk staging in order to get the best performance
 possible.

 Could you provide some information regarding this topic?


 Thanks in advance for your help

 Regards

 Patrick




-- 
--
Blog: http://fakoli.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS versus VxFS as file system inside Netbackup 6.0 DSSU

2008-01-15 Thread Paul Kraus
On 1/15/08, Selim Daoud [EMAIL PROTECTED] wrote:

 With ZFS you can compress data on disk - that is a great advantage
 when doing backup to disk.
 Also, for DSSUs you need to multiply the number of file systems (one fs
 per storage unit); the advantage of ZFS is that you don't need to fix
 the size of each fs up front (the space is shared among all of them).

But ... NBU (at least version 6.0) attempts to estimate the
size of the backup and make sure there is enough room on the DSSU to
handle it. What happens when the free space reported by ZFS isn't
really the free space?

We are using NBU DSSU against both UFS and ZFS (but not
against VxFS) and have not noticed any FS related performance
limitations. The clients and the network are all slower.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Sound Designer, Noel Coward's Hay Fever
@ Albany Civic Theatre, Feb./Mar. 2008
- Facilities Coordinator, Albacon 2008
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ?REFER in zfs list

2008-01-15 Thread Robert Milkowski
Hello Kevin,

Tuesday, January 15, 2008, 7:53:47 PM, you wrote:

KR What does the REFER column represent in zfs list?

It's how much data the given dataset refers to - in other words, the
disk usage for that file system (not counting snapshots, IIRC).
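
A quick way to see it side by side (a sketch - substitute your own
pool/fs name for tank/home):

# zfs list -o name,used,avail,refer tank/home

USED includes space consumed by snapshots and descendants, while REFER
is just the data the dataset itself currently points at.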

-- 
Best regards,
 Robert Milkowski   mailto:[EMAIL PROTECTED]
                    http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS versus VxFS as file system inside Netbackup 6.0 DSSU

2008-01-15 Thread Sri Sudarsan
Regarding the question asked below, namely "What happens when the free
space reported by ZFS isn't really the free space?", is there an open
bug for this?

Thanks,

Sri
Paul Kraus wrote:
 On 1/15/08, Selim Daoud [EMAIL PROTECTED] wrote:

  With ZFS you can compress data on disk - that is a great advantage
  when doing backup to disk.
  Also, for DSSUs you need to multiply the number of file systems (one fs
  per storage unit); the advantage of ZFS is that you don't need to fix
  the size of each fs up front (the space is shared among all of them).

 But ... NBU (at least version 6.0) attempts to estimate the
 size of the backup and make sure there is enough room on the DSSU to
 handle it. What happens when the free space reported by ZFS isn't
 really the free space?

 We are using NBU DSSU against both UFS and ZFS (but not
 against VxFS) and have not noticed any FS related performance
 limitations. The clients and the network are all slower.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS versus VxFS as file system inside Netbackup 6.0 DSSU

2008-01-15 Thread Richard Elling
Sri Sudarsan wrote:
 Regarding the question asked below, namely "What happens when the free
 space reported by ZFS isn't really the free space?", is there an open
 bug for this?

Not a bug.  It is a result of the dynamic nature of ZFS.  For example,
when compression is enabled, we cannot tell in advance how well
the data will compress, so how could we say how much space is
available?  Other items to consider: dynamically allocated, redundant,
and compressed metadata; snapshots; multiple file systems in a pool,
each with potentially different features including compression
algorithms and data redundancy; clones; failed media; failed devices;
etc. Kinda reminds me of the old question: how much stuff can you
put into a hole in your pocket?
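
You can see the moving parts for a given dataset with something like the
following (a sketch - the dataset name is just an example):

# zfs get compression,compressratio,used,available tank/dssu

The compressratio and available values only reflect what has already been
written and what the pool looks like right now, not what future writes
will turn into.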
 -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS versus VxFS as file system inside Netbackup 6.0 DSSU

2008-01-15 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 01/15/2008 03:04:15 PM:

 Sri
 Paul Kraus wrote:
  On 1/15/08, Selim Daoud [EMAIL PROTECTED] wrote:

   With ZFS you can compress data on disk - that is a great advantage
   when doing backup to disk.
   Also, for DSSUs you need to multiply the number of file systems (one fs
   per storage unit); the advantage of ZFS is that you don't need to fix
   the size of each fs up front (the space is shared among all of them).

  But ... NBU (at least version 6.0) attempts to estimate the
  size of the backup and make sure there is enough room on the DSSU to
  handle it. What happens when the free space reported by ZFS isn't
  really the free space?

  We are using NBU DSSU against both UFS and ZFS (but not
  against VxFS) and have not noticed any FS related performance
  limitations. The clients and the network are all slower.

 Regarding the question asked below, namely "What happens when the free
 space reported by ZFS isn't really the free space?", is there an open
 bug for this?


I do not believe it is a ZFS bug.  Consider:

The NetBackup server scans a backup client system.
It determines it will need 600 GB of disk space on the disk store.
It stats the ZFS volume and sees there is 700 GB free (enough for the
backup).
It starts writing 600 GB over multiple hours.
In the meantime, 500 GB is used elsewhere in the pool.
Does NetBackup fail any differently here than it would on VxFS+VxVM?

Isn't it NetBackup's issue to make sure that it has reserved disk space,
or at least to check for space _as_ it writes?
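
FWIW, one way to pin space at the ZFS level (a sketch only - the fs name
and size are hypothetical) would be a reservation on the DSSU file system:

# zfs set reservation=600G backup/dssu1

That space is then held for that file system even if the rest of the pool
fills up, though NetBackup itself still has no idea the pool is shared.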

-Wade




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS versus VxFS as file system inside Netbackup 6.0 DSSU

2008-01-15 Thread Selim Daoud
AFAIK, NBU does no estimation of the backup size prior to starting the
job. The backup job is split into fixed-size segments; if a segment does
not fit, it will try to back up to another disk, or it will wait for
more space.

On Jan 15, 2008 8:42 PM, Paul Kraus [EMAIL PROTECTED] wrote:
 On 1/15/08, Selim Daoud [EMAIL PROTECTED] wrote:

  With ZFS you can compress data on disk - that is a great advantage
  when doing backup to disk.
  Also, for DSSUs you need to multiply the number of file systems (one fs
  per storage unit); the advantage of ZFS is that you don't need to fix
  the size of each fs up front (the space is shared among all of them).

 But ... NBU (at least version 6.0) attempts to estimate the
 size of the backup and make sure there is enough room on the DSSU to
 handle it. What happens when the free space reported by ZFS isn't
 really the free space?

 We are using NBU DSSU against both UFS and ZFS (but not
 against VxFS) and have not noticed any FS related performance
 limitations. The clients and the network are all slower.

 --
 {1-2-3-4-5-6-7-}
 Paul Kraus
 - Sound Designer, Noel Coward's Hay Fever
 @ Albany Civic Theatre, Feb./Mar. 2008
 - Facilities Coordinator, Albacon 2008





-- 
--
Blog: http://fakoli.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Removing An Errant Drive From Zpool

2008-01-15 Thread Ben Rockwood
I made a really stupid mistake... While having trouble removing a hot
spare marked as failed, I was trying several ways to put it back into a
good state.  One thing I tried was 'zpool add pool c5t3d0'... but I
forgot to use the proper syntax, 'zpool add pool spare c5t3d0'.
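
In hindsight, a dry run would have caught it (a sketch, using the same
pool name as above):

# zpool add -n pool spare c5t3d0

-n just prints the configuration that would result, without modifying
the pool.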

Now I'm in a bind.  I've got 4 large raidz2s and now this puny 500GB
drive in the config:

...
  raidz2      ONLINE   0 0 0
    c5t7d0    ONLINE   0 0 0
    c5t2d0    ONLINE   0 0 0
    c7t7d0    ONLINE   0 0 0
    c6t7d0    ONLINE   0 0 0
    c1t7d0    ONLINE   0 0 0
    c0t7d0    ONLINE   0 0 0
    c4t3d0    ONLINE   0 0 0
    c7t3d0    ONLINE   0 0 0
    c6t3d0    ONLINE   0 0 0
    c1t3d0    ONLINE   0 0 0
    c0t3d0    ONLINE   0 0 0
  c5t3d0      ONLINE   0 0 0
spares
  c5t3d0      FAULTED  corrupted data
  c4t7d0      AVAIL
...



Detach and remove won't work.  Does anyone know of a way to get that
c5t3d0 out of the data configuration and back to being a hot spare where
it belongs?

If I understand the layout properly, though, this should not have an
adverse impact on my existing configuration.  If I can't dump it, what
happens when that disk fills up?

I can't believe I made such a bone-headed mistake.  This is one of those
times when an "Are you sure you...?" prompt would be helpful. :(

benr.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Removing An Errant Drive From Zpool

2008-01-15 Thread Eric Schrock
There's really no way to recover from this, since we don't have device
removal.  However, I'm surprised that no warning was given.  There are
at least two things that should have happened:

1. zpool(1M) should have warned you that the redundancy level you were
   attempting did not match that of your existing pool.  This doesn't
   apply if you already have a mixed level of redundancy.

2. zpool(1M) should have warned you that the device was in use as an
   active spare and not let you continue.

What bits were you running?

- Eric

--
Eric Schrock, FishWorks    http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Cheap ZFS homeserver.

2008-01-15 Thread Marcus Sundman
 So I was hoping that this board would work: [...]GA-M57SLI-S4

I've been looking at that very same board for the very same purpose. It
has two gigabit NICs and six SATA ports, supports ECC memory, and is
passively cooled. And it's very cheap compared to most systems that
people recommend for running OpenSolaris on. (A GA-M57SLI-S4, an
Athlon64 LE-1620 and 2 x 1GB 800MHz DDR2 ECC together come to only
165-175 € here, which is a lot less than what the recommended SATA cards
cost. Add three 500GB disks and you have a pretty nice RAID-Z system for
a total of only 440 € (assuming you already have a case and PSU, which I
do). Or you could use three 1TB disks instead, add a good UPS, and still
have the whole package for less than 1000 €.)

There are not many reports about the nforce 570 sli chipset, but
several people have got the nforce 570 chipset working without problems.

Here is a system with the GA-M57SLI-S4 in the HCL:
http://www.sun.com/bigadmin/hcl/data/systems/details/2714.html
It says that under Solaris Express Developer Edition 05/07 the SATA
ports run in Legacy Mode (which means no hot-swap or NCQ, but I don't
know whether it has any other downsides - anyone?). However, there seem
to have been new MCP55 drivers (all nForce 570 chips are MCP55-based)
released since then:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6296435

Has anyone tested the new mcp55 drivers with the sata ports on an
nforce 570 sli motherboard?


- Marcus
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Removing An Errant Drive From Zpool

2008-01-15 Thread Ben Rockwood

Eric Schrock wrote:
 There's really no way to recover from this, since we don't have device
 removal.  However, I'm suprised that no warning was given.  There are at
 least two things that should have happened:

 1. zpool(1M) should have warned you that the redundancy level you were
attempting did not match that of your existing pool.  This doesn't
apply if you already have a mixed level of redundancy.

 2. zpool(1M) should have warned you that the device was in use as an
active spare and not let you continue.

 What bits were you running?
   

snv_78; however, the pool was created on snv_43 and hasn't yet been
upgraded.  Though, programmatically, I can't see why there would be a
difference in the way 'zpool' would handle the check.

The big question is: if I'm stuck like this permanently, what's the
potential risk?

Could I potentially just fail that drive and leave it in a failed state?

benr.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] RFE: File revisions on ZFS

2008-01-15 Thread Paul
Wouldn't it be nice to have file revisions implemented in ZFS?
Mainframe file systems (e.g. on MVS) have this, and I think it should
not be too hard to implement in ZFS.

Use Cases:
* Simple configuration management, a step below SCCS etc.
* Simple built-in FS trash bin
* Set # of revisions high for user homes and /var, even higher for /etc FS, set 
it low for /usr.
* you name it

Benefits:
* lower administrative costs, e.g. cfg mgmt for the average case
* a trash bin independent of the GUI
* you name it

Functionality:
* The attribute 'active_revision' allows me to retrieve an old copy of a file.
As long as I do not alter its contents, I can retrieve younger revisions.  On
write, all younger revisions are discarded (keep it simple).
* I can label a revision with a tag, similar to well-known version control
systems.

Discussion topics:
* disk space consumption is ruled by a simple rule: min_revisions overrules
min_free_space?
* you name it

Quick brainstorm:
We would need a few new FS and file attributes, and some functions:

revisions=on|off
rev:compress=on|off|lzjb|gzip # inherited from the FS compression
rev:max_revisions=integer|none (default)|unlimited
rev:min_revisions=integer|none (default)
rev:min_free=integer[specifier]  # specifier can be the usual b,k,M,G,T, %,
or none

and for convenience a few file attributes along the same lines:
rev:max_revisions, rev:min_revisions, rev:revisions (ro),
rev:active_revision, rev:tag
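
For comparison, the closest thing available today (a sketch - the dataset
and snapshot names are made up) is snapshot-based:

# zfs snapshot tank/home@before-edit
# zfs set snapdir=visible tank/home
# ls /tank/home/.zfs/snapshot/before-edit/

That gives whole-filesystem point-in-time copies rather than per-file
revisions, which is part of what a rev:* style interface would add.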

Any comments?  Paul
---
$ locate groupsex
/opt/gnome/lib/epiphany/1.8/extensions/libtabgroupsextension.so
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss