Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-12 Thread Joe Little

On 12/12/06, James F. Hranicky <[EMAIL PROTECTED]> wrote:

Jim Davis wrote:

>> Have you tried using the automounter as suggested by the linux faq?:
>> http://nfs.sourceforge.net/#section_b
>
> Yes.  On our undergrad timesharing system (~1300 logins) we actually hit
> that limit with a standard automounting scheme.  So now we make static
> mounts of the Netapp /home space and then use amd to make symlinks to
> the home directories.  Ugly, but it works.

This is how we've always done it, but we use amd (am-utils) to manage two
maps, a filesystem map and a homes map. The entries in the homes map are all
type:=link, so amd handles the link creation for us, and we only have a
handful of mounts on any system.

It looks like if each user has a ZFS quota-ed home directory which acts as
its own little filesystem, we won't be able to do this anymore, as we'll have
to export and mount each user directory separately. Is this the case, or is
there a way to export and mount a volume containing zfs quota-ed directories,
i.e., have the quota-ed subdirs not necessarily act like they're separate
filesystems?



This is definitely a feature I'd love to see, whereby one can share
the filesystem at a higher point in the tree (e.g. with /pool/a/b, sharing
/pool/a but having "b" as its own filesystem). I know this breaks some
of the sharing semantics, but I'd love for clients to be able to mount
/pool/a and by way of that see b as well, rather than having b treated as
a separate share.
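
For illustration only (server name and paths are invented), this is roughly
the behaviour being described today -- mounting the parent shows the child's
directory but not its contents, so every child filesystem has to be mounted
as its own share:

mount server:/pool/a /mnt/a
ls /mnt/a/b          # b shows up, but only as an empty directory, because
                     # pool/a/b is a separate filesystem and a separate share
mount server:/pool/a/b /mnt/a/b    # each child must be mounted explicitly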



Jim


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-12 Thread James F. Hranicky
Jim Davis wrote:

>> Have you tried using the automounter as suggested by the linux faq?:
>> http://nfs.sourceforge.net/#section_b
> 
> Yes.  On our undergrad timesharing system (~1300 logins) we actually hit
> that limit with a standard automounting scheme.  So now we make static
> mounts of the Netapp /home space and then use amd to make symlinks to
> the home directories.  Ugly, but it works.

This is how we've always done it, but we use amd (am-utils) to manage two
maps, a filesystem map and a homes map. The entries in the homes map are all
type:=link, so amd handles the link creation for us, and we only have a
handful of mounts on any system.
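
For anyone unfamiliar with that layout, a rough sketch of the two maps
(host names, paths and map names here are invented, not our actual maps;
details vary with how amd is configured):

# amd.net -- the filesystem map, served by amd under /n: a handful of
# real NFS mounts
filer       type:=nfs;rhost:=filer;rfs:=/vol/home

# amd.home -- the homes map, served by amd under /home: every entry is
# type:=link, so amd only creates a symlink per user into the single
# NFS mount above
/defaults   type:=link
*           fs:=/n/filer/${key}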

It looks like if each user has a ZFS quota-ed home directory which acts as
its own little filesystem, we won't be able to do this anymore, as we'll have
to export and mount each user directory separately. Is this the case, or is
there a way to export and mount a volume containing zfs quota-ed directories,
i.e., have the quota-ed subdirs not necessarily act like they're separate
filesystems?
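
For context, the ZFS layout being discussed looks roughly like this (pool
and user names are hypothetical); today each child filesystem is exported
and has to be mounted separately by clients:

zfs create pool/home
zfs set sharenfs=on pool/home      # children inherit the sharenfs property
zfs create pool/home/alice
zfs set quota=2g pool/home/alice   # per-user quota means per-user filesystem
zfs create pool/home/bob
zfs set quota=2g pool/home/bob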

Jim


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-12 Thread Robert Milkowski
Hello Jim,

Wednesday, December 6, 2006, 3:28:53 PM, you wrote:

JD> We have two aging Netapp filers and can't afford to buy new Netapp gear,
JD> so we've been looking with a lot of interest at building NFS fileservers
JD> running ZFS as a possible future approach.  Two issues have come up in the
JD> discussion

JD> - Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
JD> nicely).  Mirroring is an alternative, but when you're on a tight budget
JD> losing N/2 disk capacity is painful.

Actually you can add another raid-z group to the pool.
I believe it's the same as what NetApp does (rather than actually
growing the raid group).

JD> - The default scheme of one filesystem per user runs into problems with
JD> linux NFS clients; on one linux system, with 1300 logins, we already have
JD> to do symlinks with amd because linux systems can't mount more than about
JD> 255 filesystems at once.  We can of course just have one filesystem 
JD> exported, and make /home/student a subdirectory of that, but then we run
JD> into problems with quotas -- and on an undergraduate fileserver, quotas
JD> aren't optional!

It can with 2.6 kernels.
However, there are other problems; we ended up with a limit of around
700.

-- 
Best regards,
 Robert                       mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-09 Thread Richard Elling

Jim Davis wrote:

eric kustarz wrote:
What about adding a whole new RAID-Z vdev and dynamically striping across
the RAID-Zs?  Your capacity and performance will go up with each
RAID-Z vdev you add.


Thanks, that's an interesting suggestion.


This has the benefit of allowing you to grow into your storage.
Also, a 3-disk raid-z set has better reliability than a 4-disk set.
The per-set performance will be about the same, so if you have 12 disks, four
3-disk sets will perform better and be more reliable than three 4-disk
sets.  The available space will be smaller, though -- there is no free lunch.
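
In other words, with 12 disks the trade-off looks like this (device names
are hypothetical; pick one layout or the other):

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 \
                  raidz c1t3d0 c1t4d0 c1t5d0 \
                  raidz c2t0d0 c2t1d0 c2t2d0 \
                  raidz c2t3d0 c2t4d0 c2t5d0
# four 3-disk raid-z sets: better performance and reliability, less space

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                  raidz c1t4d0 c1t5d0 c2t0d0 c2t1d0 \
                  raidz c2t2d0 c2t3d0 c2t4d0 c2t5d0
# three 4-disk raid-z sets: more usable space, fewer sets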
 -- richard


Have you tried using the automounter as suggested by the linux faq?:
http://nfs.sourceforge.net/#section_b


Yes.  On our undergrad timesharing system (~1300 logins) we actually hit 
that limit with a standard automounting scheme.  So now we make static 
mounts of the Netapp /home space and then use amd to make symlinks to 
the home directories.  Ugly, but it works.



Solaris folks shouldn't laugh too hard: SunOS 4 had an artificial limit
on the number of client mount points too -- a bug which only read 8 kBytes
from mnttab; if mnttab overflowed, you hung.  That was fixed many, many years
ago, and now mnttab is not actually a file at all ;-)

 -- richard


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Eric Kustarz

Jim Davis wrote:

eric kustarz wrote:



What about adding a whole new RAID-Z vdev and dynamically striping across
the RAID-Zs?  Your capacity and performance will go up with each
RAID-Z vdev you add.



Thanks, that's an interesting suggestion.



Have you tried using the automounter as suggested by the linux faq?:
http://nfs.sourceforge.net/#section_b



Yes.  On our undergrad timesharing system (~1300 logins) we actually hit 
that limit with a standard automounting scheme.  So now we make static 
mounts of the Netapp /home space and then use amd to make symlinks to 
the home directories.  Ugly, but it works.


Ug indeed.





Also, ask for reasoning/schedule on when they are going to fix this on 
the linux NFS alias (i believe it's [EMAIL PROTECTED]).  Trond
should be able to help you.



It's item 9 (last) on their "medium priority" list, according to 
http://www.linux-nfs.org/priorities.html.  That doesn't sound like a fix 
is coming soon.


Hmm, looks like that list is a little out of date, i'll ask trond to 
update it.





If going to OpenSolaris clients is not an option, then i would be 
curious to know why.



Ah, well... it was a Solaris system for many years.  And we were mostly 
a Solaris shop for many years.  Then Sun hardware got too pricey, and 
fast Intel systems got cheap but at the time Solaris support for them 
lagged and Linux matured and...  and now Linux is entrenched. It's a 
story other departments here could tell.  And at other universities too 
I'll bet.  So the reality is we have to make whatever we run on our 
servers play well with Linux clients.


Ok, can i ask a favor then?  Could you try one OpenSolaris client 
(should work fine on the existing hardware you have) and let us know if 
that works better/worse for you?  And as Ed just mentioned, i would be 
really interested if BrandZ fits your needs (then you could have one+ 
zone with a linux userland and opensolaris kernel).


eric




Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Darren J Moffat

Edward Pilatowicz wrote:

On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote:

We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach.  Two issues have come up in the
discussion

- Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
nicely).  Mirroring is an alternative, but when you're on a tight budget
losing N/2 disk capacity is painful.

- The default scheme of one filesystem per user runs into problems with
linux NFS clients; on one linux system, with 1300 logins, we already have
to do symlinks with amd because linux systems can't mount more than about
255 filesystems at once.  We can of course just have one filesystem
exported, and make /home/student a subdirectory of that, but then we run
into problems with quotas -- and on an undergraduate fileserver, quotas
aren't optional!



well, if the mount limitation is imposed by the linux kernel you might
consider trying to run linux in a zone on solaris (via BrandZ).  Since
BrandZ allows you to execute linux programs on a solaris kernel you
shouldn't have a problem with limits imposed by the linux kernel.
brandz currently ships in solaris express (or solaris express
community release) build snv_49 or later.


Another alternative is to pick an OpenSolaris based distribution that 
"looks and feels" more like Linux.  Nexenta might do that for you.


--
Darren J Moffat


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Edward Pilatowicz
On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote:
> We have two aging Netapp filers and can't afford to buy new Netapp gear,
> so we've been looking with a lot of interest at building NFS fileservers
> running ZFS as a possible future approach.  Two issues have come up in the
> discussion
>
> - Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
> nicely).  Mirroring is an alternative, but when you're on a tight budget
> losing N/2 disk capacity is painful.
>
> - The default scheme of one filesystem per user runs into problems with
> linux NFS clients; on one linux system, with 1300 logins, we already have
> to do symlinks with amd because linux systems can't mount more than about
> 255 filesystems at once.  We can of course just have one filesystem
> exported, and make /home/student a subdirectory of that, but then we run
> into problems with quotas -- and on an undergraduate fileserver, quotas
> aren't optional!
>

well, if the mount limitation is imposed by the linux kernel you might
consider trying to run linux in a zone on solaris (via BrandZ).  Since
BrandZ allows you to execute linux programs on a solaris kernel you
shouldn't have a problem with limits imposed by the linux kernel.
brandz currently ships in solaris express (or solaris express
community release) build snv_49 or later.

you can find more info on brandz here:
http://opensolaris.org/os/community/brandz/
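
for the record, a very rough sketch of creating an lx branded zone (zone
name, zonepath and image path are made up here; see the brandz docs above
for the real procedure and the supported linux images):

# zonecfg -z lxhome
zonecfg:lxhome> create -t SUNWlx
zonecfg:lxhome> set zonepath=/zones/lxhome
zonecfg:lxhome> commit
zonecfg:lxhome> exit
# zoneadm -z lxhome install -d /path/to/linux-image.tar
# zoneadm -z lxhome boot
# zlogin lxhome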

ed


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Jim Davis

eric kustarz wrote:



What about adding a whole new RAID-Z vdev and dynamically striping across
the RAID-Zs?  Your capacity and performance will go up with each RAID-Z
vdev you add.


Thanks, that's an interesting suggestion.



Have you tried using the automounter as suggested by the linux faq?:
http://nfs.sourceforge.net/#section_b


Yes.  On our undergrad timesharing system (~1300 logins) we actually hit 
that limit with a standard automounting scheme.  So now we make static 
mounts of the Netapp /home space and then use amd to make symlinks to 
the home directories.  Ugly, but it works.




Also, ask for reasoning/schedule on when they are going to fix this on 
the linux NFS alias (i believe it's [EMAIL PROTECTED]).  Trond
should be able to help you.


It's item 9 (last) on their "medium priority" list, according to 
http://www.linux-nfs.org/priorities.html.  That doesn't sound like a fix 
is coming soon.



If going to OpenSolaris clients is not an 
option, then i would be curious to know why.


Ah, well... it was a Solaris system for many years.  And we were mostly 
a Solaris shop for many years.  Then Sun hardware got too pricey, and 
fast Intel systems got cheap but at the time Solaris support for them 
lagged and Linux matured and...  and now Linux is entrenched. It's a 
story other departments here could tell.  And at other universities too 
I'll bet.  So the reality is we have to make whatever we run on our 
servers play well with Linux clients.



Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread eric kustarz

Jim Davis wrote:
We have two aging Netapp filers and can't afford to buy new Netapp gear, 
so we've been looking with a lot of interest at building NFS fileservers 
running ZFS as a possible future approach.  Two issues have come up in 
the discussion


- Adding new disks to a RAID-Z pool (Netapps handle adding new disks 
very nicely).  Mirroring is an alternative, but when you're on a tight 
budget losing N/2 disk capacity is painful.


What about adding a whole new RAID-Z vdev and dynamically striping across
the RAID-Zs?  Your capacity and performance will go up with each RAID-Z
vdev you add.


Such as:
# zpool create swim raidz /var/tmp/dev1 /var/tmp/dev2 /var/tmp/dev3
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
swim   ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev1  ONLINE   0 0 0
/var/tmp/dev2  ONLINE   0 0 0
/var/tmp/dev3  ONLINE   0 0 0

errors: No known data errors
# zpool add swim raidz /var/tmp/dev4 /var/tmp/dev5 /var/tmp/dev6
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
swim   ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev1  ONLINE   0 0 0
/var/tmp/dev2  ONLINE   0 0 0
/var/tmp/dev3  ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev4  ONLINE   0 0 0
/var/tmp/dev5  ONLINE   0 0 0
/var/tmp/dev6  ONLINE   0 0 0

errors: No known data errors
#
# zpool add swim raidz /var/tmp/dev7 /var/tmp/dev8 /var/tmp/dev9
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
swim   ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev1  ONLINE   0 0 0
/var/tmp/dev2  ONLINE   0 0 0
/var/tmp/dev3  ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev4  ONLINE   0 0 0
/var/tmp/dev5  ONLINE   0 0 0
/var/tmp/dev6  ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev7  ONLINE   0 0 0
/var/tmp/dev8  ONLINE   0 0 0
/var/tmp/dev9  ONLINE   0 0 0

errors: No known data errors
#




- The default scheme of one filesystem per user runs into problems with 
linux NFS clients; on one linux system, with 1300 logins, we already 
have to do symlinks with amd because linux systems can't mount more than 
about 255 filesystems at once.  We can of course just have one 
filesystem exported, and make /home/student a subdirectory of that, but 
then we run into problems with quotas -- and on an undergraduate 
fileserver, quotas aren't optional!


Have you tried using the automounter as suggested by the linux faq?:
http://nfs.sourceforge.net/#section_b

Look for section "B3. Why can't I mount more than 255 NFS file systems 
on my client? Why is it sometimes even less than 255?".
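
For reference, the automounter setup the faq suggests boils down to a
wildcard map along these lines (server name and paths invented), so that
only the home directories actually in use are mounted at any one time:

# /etc/auto.master
/home   /etc/auto.home   --timeout=600

# /etc/auto.home -- one NFS mount per user, created on demand and
# unmounted again when idle
*   -rw,hard,intr   fileserver:/export/home/&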


Let us know if that works or doesn't work.

Also, ask for reasoning/schedule on when they are going to fix this on 
the linux NFS alias (i believe it's [EMAIL PROTECTED]).  Trond
should be able to help you.  If going to OpenSolaris clients is not an 
option, then i would be curious to know why.


eric



Neither of these problems are necessarily showstoppers, but both make 
the transition more difficult.  Any progress that could be made with 
them would help sites like us make the switch sooner.



Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Rob



You can add more disks to a pool that uses raid-z; you just can't
add disks to an existing raid-z vdev.
 


cd /usr/tmp
# create ten 100MB sparse files to use as stand-in disks
mkfile -n 100m 1 2 3 4 5 6 7 8 9 10
# start with a single 3-device raid-z vdev
zpool create t raidz /usr/tmp/1 /usr/tmp/2 /usr/tmp/3
zpool status t
zfs list t
# grow the pool by adding a second vdev (raidz2 this time; -f forces the
# mismatched replication levels)
zpool add -f t raidz2 /usr/tmp/4 /usr/tmp/5 /usr/tmp/6 /usr/tmp/7
zpool status t
zfs list t
# add a plain (unreplicated) vdev plus a hot spare (-f again for the
# replication-level mismatch)
zpool add -f t /usr/tmp/8 spare /usr/tmp/9
zpool status t
zfs list t
# turn the plain vdev into a mirror by attaching a second device
zpool attach t /usr/tmp/8 /usr/tmp/10
zpool status t
zfs list t
sleep 10
# simulate losing a device, then scrub and replace it
rm /usr/tmp/5
zpool scrub t
sleep 3
zpool status t
mkfile -n 100m 5
zpool replace t /usr/tmp/5
zpool status t
sleep 10
zpool status t
# replace each device of the first raid-z vdev with a larger (200MB) one
zpool offline t /usr/tmp/1
mkfile -n 200m 1
zpool replace t /usr/tmp/1
zpool status t
sleep 10
zpool status t
zpool offline t /usr/tmp/2
mkfile -n 200m 2
zpool replace t /usr/tmp/2
zfs list t
sleep 10
zpool offline t /usr/tmp/3
mkfile -n 200m 3
zpool replace t /usr/tmp/3
sleep 10
zfs list t
# clean up
zpool destroy t
rm 1 2 3 4 5 6 7 8 9 10



Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Casper . Dik

>- The default scheme of one filesystem per user runs into problems with 
>linux NFS clients; on one linux system, with 1300 logins, we already have 
>to do symlinks with amd because linux systems can't mount more than about 
>255 filesystems at once.  We can of course just have one filesystem 
>exported, and make /home/student a subdirectory of that, but then we run 
>into problems with quotas -- and on an undergraduate fileserver, quotas 
>aren't optional!

Heh, you have the Linux source, so fix that :-)

Or just run Solaris on the NFS clients :-)...



You can grow a RAID-Z pool but only by adding another set of disks, not
one disk at a time.

Casper


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Darren J Moffat

On Wed, 6 Dec 2006, Jim Davis wrote:

We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach.  Two issues have come up in the
discussion

- Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
nicely).  Mirroring is an alternative, but when you're on a tight budget
losing N/2 disk capacity is painful.


You can add more disks to a pool that uses raid-z; you just can't
add disks to an existing raid-z vdev.

The following config was done in two steps:

$ zpool status
  pool: cube
 state: ONLINE
 scrub: scrub completed with 0 errors on Mon Dec  4 03:52:18 2006
config:

NAME STATE READ WRITE CKSUM
cube ONLINE   0 0 0
  raidz1 ONLINE   0 0 0
c5t0d0   ONLINE   0 0 0
c5t1d0   ONLINE   0 0 0
c5t2d0   ONLINE   0 0 0
c5t3d0   ONLINE   0 0 0
c5t4d0   ONLINE   0 0 0
c5t5d0   ONLINE   0 0 0
  raidz1 ONLINE   0 0 0
c5t8d0   ONLINE   0 0 0
c5t9d0   ONLINE   0 0 0
c5t10d0  ONLINE   0 0 0
c5t11d0  ONLINE   0 0 0
c5t12d0  ONLINE   0 0 0
c5t13d0  ONLINE   0 0 0


The targets t0 through t5 were added initially; many days later,
the targets t8 through t13 were added.
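
Presumably the two steps were along these lines (reconstructed from the
status output above, not the exact commands that were run):

# zpool create cube raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
... many days later ...
# zpool add cube raidz c5t8d0 c5t9d0 c5t10d0 c5t11d0 c5t12d0 c5t13d0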

The fact that these are all the same controller isn't relevant.

This is actually what you want with raid-z anyway; in my case above,
it wouldn't be good for performance to have all 12 disks in one
top-level raid-z.


- The default scheme of one filesystem per user runs into problems with
linux NFS clients; on one linux system, with 1300 logins, we already have
to do symlinks with amd because linux systems can't mount more than about
255 filesystems at once.  We can of course just have one filesystem
exported, and make /home/student a subdirectory of that, but then we run
into problems with quotas -- and on an undergraduate fileserver, quotas
aren't optional!


So how can OpenSolaris help you with a Linux kernel restriction
on the number of mounts?

Hey I know, get rid of the Linux boxes and replace them with OpenSolaris
based ones ;-)

Seriously, what are you expecting OpenSolaris and ZFS/NFS in particular
to be able to do about a restriction in Linux?


--
Darren J Moffat


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Al Hopper
On Wed, 6 Dec 2006, Jim Davis wrote:

> We have two aging Netapp filers and can't afford to buy new Netapp gear,
> so we've been looking with a lot of interest at building NFS fileservers
> running ZFS as a possible future approach.  Two issues have come up in the
> discussion
>
> - Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
> nicely).  Mirroring is an alternative, but when you're on a tight budget
> losing N/2 disk capacity is painful.
>
> - The default scheme of one filesystem per user runs into problems with
> linux NFS clients; on one linux system, with 1300 logins, we already have
> to do symlinks with amd because linux systems can't mount more than about
> 255 filesystems at once.  We can of course just have one filesystem
> exported, and make /home/student a subdirectory of that, but then we run
> into problems with quotas -- and on an undergraduate fileserver, quotas
> aren't optional!
>
> Neither of these problems are necessarily showstoppers, but both make the
> transition more difficult.  Any progress that could be made with them
> would help sites like us make the switch sooner.

The showstopper might be performance - since the Netapp has nonvolatile
memory - which greatly accelerates NFS operations.  A good strategy is to
build a ZFS test system and determine if it provides the NFS performance
you expect in your environment.  Remember that ZFS "likes" inexpensive
SATA disk drives - so a test system will be kind to your budget and the
hardware is re-usable when you decide to deploy ZFS.  And you may very
well find other, unintended uses for that "test" system.
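
If you do build one, a quick-and-dirty comparison against the filers from a
Linux client could look something like this (host name and paths are
hypothetical; a real evaluation should use your actual workload):

mount -t nfs -o vers=3,hard,intr zfstest:/export/home /mnt/zfstest
time tar xf /var/tmp/linux-2.6.18.tar -C /mnt/zfstest    # lots of small files
time dd if=/dev/zero of=/mnt/zfstest/bigfile bs=1024k count=1024   # streaming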

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 OpenSolaris Governing Board (OGB) Member - Feb 2006


[zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Jim Davis
We have two aging Netapp filers and can't afford to buy new Netapp gear, 
so we've been looking with a lot of interest at building NFS fileservers 
running ZFS as a possible future approach.  Two issues have come up in the 
discussion


- Adding new disks to a RAID-Z pool (Netapps handle adding new disks very 
nicely).  Mirroring is an alternative, but when you're on a tight budget 
losing N/2 disk capacity is painful.


- The default scheme of one filesystem per user runs into problems with 
linux NFS clients; on one linux system, with 1300 logins, we already have 
to do symlinks with amd because linux systems can't mount more than about 
255 filesystems at once.  We can of course just have one filesystem 
exported, and make /home/student a subdirectory of that, but then we run 
into problems with quotas -- and on an undergraduate fileserver, quotas 
aren't optional!


Neither of these problems are necessarily showstoppers, but both make the 
transition more difficult.  Any progress that could be made with them 
would help sites like us make the switch sooner.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss