Re: [zfs-discuss] zfs

2007-04-16 Thread Victor Latushkin

Hi Roman,

From the provided data I suppose that you are running unpatched Solaris 10 
Update 3.


Since the fault address is 0xc4, and zio_create mostly manipulates zio_t 
structures, 0xc4 most likely corresponds to the io_child member of the 
zio_t structure. If my assumption about the Solaris update is correct, 
then the corresponding piece of code is:


zio_create+0x133:   movl   -0x4(%ebp),%edx
zio_create+0x136:   movl   0x1f0(%edx),%eax
zio_create+0x13c:   movl   0x1f4(%edx),%ecx
zio_create+0x142:   addl   $0x1,%eax
zio_create+0x145:   adcl   $0x0,%ecx
zio_create+0x148:   movl   %eax,0x1f0(%edx)
zio_create+0x14e:   movl   %ecx,0x1f4(%edx)
zio_create+0x154:   movl   0xc4(%edx),%eax
zio_create+0x15a:   movl   %eax,0xcc(%ebx)
zio_create+0x160:   movl   $0x0,0xc8(%ebx)
zio_create+0x16a:   movl   0xc4(%edx),%eax
zio_create+0x170:   testl  %eax,%eax
zio_create+0x172:   je +0x8 zio_create+0x17a
zio_create+0x174:   movl   %ebx,0xc8(%eax)
zio_create+0x17a:   movl   -0x4(%ebp),%eax
zio_create+0x17d:   movl   %ebx,0xc4(%eax)
zio_create+0x183:   movl   0x34(%eax),%eax
zio_create+0x186:   movl   %eax,0x34(%ebx)
zio_create+0x189:   pushl  %esi
zio_create+0x18a:   call   +0x555fe07   mutex_exit

Failure offset +0x17d corresponds to line 371 of zio.c
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zio.c#371

Register %ebx contains the address of the newly allocated zio_t structure. 
At offset +0x17d we are trying to store %ebx at address %eax+0xc4, and 
since %eax is 0, we end up with a page fault. %eax is loaded from the stack 
one instruction earlier, so most probably something has overwritten that 
stack location very recently, since at offset +0x133 we loaded the same 
stack slot into %edx (it was 0xe83edac0) and successfully dereferenced that 
address several times.
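
If you want to repeat this kind of analysis on the saved crash dump, here 
is a rough sketch with mdb (assuming savecore left unix.0/vmcore.0 under 
/var/crash/<hostname>):

# cd /var/crash/server3
# mdb unix.0 vmcore.0
> ::status
> ::msgbuf
> zio_create+0x17d::dis
> $C

::dis prints the disassembly around the faulting instruction (as quoted 
above), and $C walks the panic thread's stack with frame pointers.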



Cheers,
Victor

Roman Chervotkin wrote:

Hi.
My system crashed today. The system rebooted without a problem and now everything looks as usual.
By the way, I used ztune.sh to tune parameters several days ago, so the problem may be related to that script.


Is that a zfs issue or something different?

Thanks, 
Roman

---
-bash-3.00#  more /var/adm/messages

...
Apr 15 09:13:53 server3 unix: [ID 836849 kern.notice] 
Apr 15 09:13:53 server3 ^Mpanic[cpu1]/thread=dc28d600: 
Apr 15 09:13:53 server3 genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ecc53b9c addr=c4 occurred in module zfs due to a NULL pointer dereference
Apr 15 09:13:53 server3 unix: [ID 10 kern.notice] 
Apr 15 09:13:53 server3 unix: [ID 839527 kern.notice] postgres: 
Apr 15 09:13:53 server3 unix: [ID 753105 kern.notice] #pf Page fault

Apr 15 09:13:53 server3 unix: [ID 532287 kern.notice] Bad kernel fault at 
addr=0xc4
Apr 15 09:13:53 server3 unix: [ID 243837 kern.notice] pid=16204, pc=0xf7eda18c, 
sp=0x0, eflags=0x10282
Apr 15 09:13:53 server3 unix: [ID 211416 kern.notice] cr0: 
80050033pg,wp,ne,et,mp,pe cr4: 6d8xmme,fxsr,pge,mce,pse,de
Apr 15 09:13:53 server3 unix: [ID 936844 kern.notice] cr2: c4 cr3: 66e8e000
Apr 15 09:13:53 server3 unix: [ID 537610 kern.notice]gs:  1b0  fs: 
e339  es: ecc50160  ds: f7ed0160
Apr 15 09:13:53 server3 unix: [ID 537610 kern.notice]   edi:d esi: 
e83edcbc ebp: ecc53bfc esp: ecc53bd4
Apr 15 09:13:53 server3 unix: [ID 537610 kern.notice]   ebx: e3390cc0 edx: 
e83edac0 ecx:0 eax:0
Apr 15 09:13:53 server3 unix: [ID 537610 kern.notice]   trp:e err:  
  2 eip: f7eda18c  cs:  158
Apr 15 09:13:53 server3 unix: [ID 717149 kern.notice]   efl:10282 usp:  
  0  ss: eb6f46d8
Apr 15 09:13:53 server3 unix: [ID 10 kern.notice] 
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53afc unix:die+a7 (e, ecc53b9c, c4, 1)

Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53b88 
unix:trap+103f (ecc53b9c, c4, 1)
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53b9c 
unix:_cmntrap+9a (1b0, e339, ecc5)
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53bfc 
zfs:zio_create+17d (e83edac0, d8969900,)
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53c44 
zfs:zio_vdev_child_io+67 (e83edac0, e7d27d8c,)
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53c98 
zfs:vdev_mirror_io_start+14d (e83edac0, ecc53cc8,)
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53ca4 
zfs:vdev_io_start+15 (e83edac0)
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53cc8 
zfs:zio_vdev_io_start+13f (e83edac0)
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53cd8 
zfs:zfsctl_ops_root+2044461b (e83edac0, ecc53d04,)
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53ce4 
zfs:zio_nowait+b (e83edac0)
Apr 15 09:13:53 server3 genunix: [ID 353471 kern.notice] ecc53d04 

[zfs-discuss] snapshot features

2007-04-16 Thread Selim Daoud

hi all,

when taking several zfs snapshots of a given fs, there are dependencies
between snapshots that complicate the management of snapshots.
is there a plan to ease these dependencies, so we can reach the snapshot
functionality that is offered in other products such as Compellent
(http://www.compellent.com/products/software/continuous_snapshots.aspx)?

Compellent's software allows setting **retention periods** for different
snapshots and will manage their migration or deletion automatically

thanks
s.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Linux

2007-04-16 Thread Joerg Schilling
Nicolas Williams [EMAIL PROTECTED] wrote:

 Sigh.  We have devolved.  Every thread on OpenSolaris discuss lists
 seems to devolve into a license discussion.

It is funny to see that in our case, the technical problems (those caused
by the fact that Linux implements a different VFS interface layer) are 
creating a much bigger problem than the license issue does.


 I have seen mailing list posts (I'd have to search again) that indicate
 [that some believe] that even dynamic linking via dlopen() qualifies as
 making a derivative.

There is not a single place in the GPL that mentions the term "linking".
For this reason, the GPL FAQ from the FSF is wrong, as it is based on the
term "linking".

There is no difference whether you link statically or dynamically.

Whether using GPLd code from a non-GPLd program creates a derived work
thus cannot depend on whether you link against it or not. If a GPLd program,
however, uses a non-GPLd library, that is definitely not a problem, or
every GPLd program linked against the libc from HP-UX would be a problem.


 If true that would mean that one could not distribute an OpenSolaris
 distribution containing a GPLed PAM module.  Or perhaps, because in that
 case the header files needed to make the linking possible are not GPLed
 the linking-makes-derivatives argument would not apply.

If the GPLd PAM module just implements a well-known plug-in interface,
a program that uses this module cannot be a derivative of the GPLd code.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Linux

2007-04-16 Thread Joerg Schilling
Paul Fisher [EMAIL PROTECTED] wrote:

 Is there any reason that the CDDL dictates, or that Sun would object,
 to zfs being made available as an independently distributed Linux kernel
 module?  In other words, if I made an Nvidia-like distribution available,
 would that be OK from the OpenSolaris side?

The way I understand how Sun open-sourced Solaris, there are no objections
from Sun's side against using OpenSolaris code inside other projects.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Erast Benson
Did you measure CPU utilization by any chance during the tests?
It's a T2000, and the CPU cores on this box are quite slow, hence they
might be a bottleneck.

just a guess.
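
If you want to capture that next time around, a lightweight sketch is to
leave the usual stat tools running in the background for the duration of
the run (the interval/count values here are arbitrary):

# mpstat 10 180 > /var/tmp/mpstat.out &
# iostat -xnz 10 180 > /var/tmp/iostat.out &

An idl column pinned near zero in mpstat while vdbench is running would
point at the cores rather than the disks.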

On Mon, 2007-04-16 at 13:10 -0400, Tony Galway wrote:
 I had previously undertaken a benchmark that pits “out of box”
 performance of UFS via SVM, VxFS and ZFS but was waylaid due to some
 outstanding availability issues in ZFS. These have been taken care of,
 and I am once again undertaking this challenge on behalf of my
 customer. The idea behind this benchmark is to show
 
  
 
 a.  How ZFS might displace the current commercial volume and file
 system management applications being used.
 
 b. The learning curve of moving from current volume management
 products to ZFS.
 
 c.  Performance differences across the different volume management
 products.
 
  
 
 VDBench is the test bed of choice, as this has been accepted by the
 customer as a telling and accurate indicator of performance. The last
 time I attempted this test it was suggested that VDBench is not
 appropriate for testing ZFS. I cannot see that being a problem: VDBench
 is a tool – if it highlights performance problems, then I would think
 it is a very effective tool, so that we might better be able to fix
 those deficiencies.
 
  
 
 Now, to the heart of my problem!
 
  
 
 The test hardware is a T2000 connected to a 12 disk SE3510 (presenting
 as JBOD)  through a brocade switch, and I am using Solaris 10 11/06.
 For Veritas, I am using Storage Foundation Suite 5.0. The systems were
 jumpstarted to the same configuration before testing a different
 volume management software to ensure there were no artifacts remaining
 from any previous test.
 
  
 
 I present my vdbench definition below for your information:
 
  
 
 sd=FS,lun=/pool/TESTFILE,size=10g,threads=8
 
 wd=DWR,sd=FS,rdpct=100,seekpct=80
 
 wd=ETL,sd=FS,rdpct=0,  seekpct=80
 
 wd=OLT,sd=FS,rdpct=70, seekpct=80
 
 rd=R1-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R1-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R1-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R2-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R2-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R2-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R3-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R3-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R3-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
  
 
 As you can see, it is fairly straightforward, and I take the average
 of the three runs in each of the ETL, OLT and DWR workloads. As an aside,
 I am also performing this test for various file system block sizes where
 applicable.
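 
 For the ZFS runs, the analogous knob for the file system block size would
 presumably be the dataset recordsize – a minimal sketch, assuming the pool
 is simply named "pool":
 
 # zfs set recordsize=8k pool
 # zfs get recordsize pool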
 
  
 
 I then ran this workload against a Raid-5 LUN created and mounted in
 each of the different file system types. Please note that one of the
 test criteria is that the associated volume management software create
 the Raid-5 LUN, not the disk subsystem.
 
  
 
 1.  UFS via SVM
 
 # metainit d20 –r d1 … d8 
 
 # newfs /dev/md/dsk/d20
 
 # mount /dev/md/dsk/d20 /pool
 
  
 
 2.  ZFS
 
 # zpool create pool raidz d1 … d8
 
  
 
 3.  VxFS – Veritas SF5.0
 
 # vxdisk init SUN35100_0 ….  SUN35100_7
 
 # vxdg init testdg SUN35100_0  … 
 
 # vxassist –g testdg make pool 418283m layout=raid5
 
  
 
  
 
 Now to my problem – Performance!  Given the test as defined above,
 VxFS absolutely blows the doors off of both UFS and ZFS during write
 operations. For example, during a single test on an 8k file system
 block, I have the following average IO Rates:
 
  
 
  
 
 
            ETL         OLTP        DWR
 UFS        390.00      1298.44     23173.60
 VxFS       15323.10    27329.04    22889.91
 ZFS        2122.23     7299.36     22940.63
  
 
  
 
 If you look at these numbers percentage-wise, with VxFS set to 100%,
 then for ETL UFS runs at 2.5% of its speed and ZFS at 13.8%; for OLTP,
 UFS is at 4.8% and ZFS at 26.7%. However, for DWR, where the workload is
 100% reads and no writing, performance is similar, with UFS at 101.2%
 and ZFS at 100.2% of the speed of VxFS.
 
  
 
 
  
 
  
 
 Given these performance problems, quite obviously VxFS quite rightly
 deserves to be the file system of choice, even with a cost premium. If
 anyone has any insight into why I am consistently seeing these types of
 very disappointing numbers, I would very much appreciate your comments.
 The numbers are very disturbing, as they indicate that write 

Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Frank Cusack

On April 16, 2007 1:10:41 PM -0400 Tony Galway [EMAIL PROTECTED] wrote:

I had previously undertaken a benchmark that pits out of box performance

...

The test hardware is a T2000 connected to a 12 disk SE3510 (presenting as

...

Now to my problem - Performance!  Given the test as defined above, VxFS
absolutely blows the doors off of both UFS and ZFS during write
operations. For example, during a single test on an 8k file system block,
I have the following average IO Rates:


Out-of-the-box performance of zfs on T2000 hardware might suffer.

http://blogs.sun.com/realneel/entry/zfs_and_databases is the
only link I could find, but there is another article somewhere
about tuning for the T2000, related to PCI on the T2000, i.e. it is
T2000-specific.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


RE: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Tony Galway
The volume is 7+1. I have created the volume using both the default (DRL) as
well as 'nolog' to turn it off, both with similar performance. On the advice
of Henk, who looked over my data: he noticed that the Veritas test seems to
be almost entirely using the file system cache. I will retest with a much
larger file to defeat this cache (I do not want to modify my mount options).
If that then shows similar performance (I will also retest ZFS with the same
file size), then the question will probably have more to do with how ZFS
handles file system caching.
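
The only change needed in the vdbench definition should be the sd line; the
64g below is an arbitrary size, picked simply to comfortably exceed the
machine's memory:

sd=FS,lun=/pool/TESTFILE,size=64g,threads=8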

-Tony

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: April 16, 2007 2:16 PM
To: [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

Is the VxVM volume 8-wide?  It is not clear from your creation commands.
  -- richard

Tony Galway wrote:
 
 
 I had previously undertaken a benchmark that pits out of box 
 performance of UFS via SVM, VxFS and ZFS but was waylaid due to some 
 outstanding availability issues in ZFS. These have been taken care of, 
 and I am once again undertaking this challenge on behalf of my customer. 
 The idea behind this benchmark is to show
 
  
 
 a.   How ZFS might displace the current commercial volume and file 
 system management applications being used.
 
 b.  The learning curve of moving from current volume management 
 products to ZFS.
 
 c.   Performance differences across the different volume management 
 products.
 
  
 
 VDBench is the test bed of choice as this has been accepted by the 
 customer as a telling and accurate indicator of performance. The last 
 time I attempted this test it had been suggested that VDBench is not 
 appropriate to testing ZFS, I cannot see that being a problem, VDBench 
 is a tool - if it highlights performance problems, then I would think it 
 is a very effective tool so that we might better be able to fix those 
 deficiencies.
 
  
 
 Now, to the heart of my problem!
 
  
 
 The test hardware is a T2000 connected to a 12 disk SE3510 (presenting 
 as JBOD)  through a brocade switch, and I am using Solaris 10 11/06. For 
 Veritas, I am using Storage Foundation Suite 5.0. The systems were 
 jumpstarted to the same configuration before testing a different volume 
 management software to ensure there were no artifacts remaining from any 
 previous test.
 
  
 
 I present my vdbench definition below for your information:
 
  
 
 sd=FS,lun=/pool/TESTFILE,size=10g,threads=8
 
 wd=DWR,sd=FS,rdpct=100,seekpct=80
 
 wd=ETL,sd=FS,rdpct=0,  seekpct=80
 
 wd=OLT,sd=FS,rdpct=70, seekpct=80
 

rd=R1-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
rd=R1-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
rd=R1-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
rd=R2-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
rd=R2-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
rd=R2-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
rd=R3-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
rd=R3-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
rd=R3-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 
  
 
 As you can see, it is fairly straight forward and I take the average of 
 the three runs in each of ETL, OLT and DWR workloads. As an aside, I am 
 also performing this test for various file system block sizes as 
 applicable as well.
 
  
 
 I then ran this workload against a Raid-5 LUN created and mounted in 
 each of the different file system types. Please note that one of the 
 test criteria is that the associated volume management software create 
 the Raid-5 LUN, not the disk subsystem.
 
  
 
 1.   UFS via SVM
 
 # metainit d20 -r d1 . d8
 
 # newfs /dev/md/dsk/d20
 
 # mount /dev/md/dsk/d20 /pool
 
  
 
 2.   ZFS
 
 # zpool create pool raidz d1 … d8
 
  
 
 3.   VxFS - Veritas SF5.0
 
 # vxdisk init SUN35100_0 ..  SUN35100_7
 
 # vxdg init testdg SUN35100_0  .
 
 # vxassist -g testdg make pool 418283m layout=raid5
 
  
 
  
 
 Now to my problem - Performance!  Given the test as defined above, VxFS 
 absolutely blows the doors off of both UFS and ZFS during write 
 operations. For example, during a single test on an 8k file system 
 block, I have the following average IO Rates:
 
  
 
  
 
   
 
            ETL         OLTP        DWR
 UFS        390.00      1298.44     23173.60
 VxFS       15323.10    27329.04    22889.91
 ZFS        2122.23     7299.36     22940.63
 
  
 
  
 
 If you look at these 

[zfs-discuss] Re: Testing of UFS, VxFS and ZFS

2007-04-16 Thread William D. Hathaway
Why are you using software-based RAID 5/RAIDZ for the tests?  I didn't think 
this was a common setup in cases where file system performance was the primary 
consideration.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs question as to sizes

2007-04-16 Thread Wade . Stuart





[EMAIL PROTECTED] wrote on 04/16/2007 04:57:43 PM:

 one pool is a mirror on 300GB drives and the other is raidz1 on 7 x
 143GB drives.

 I did make a copy of my zfs file systems with their snaps, and something is
 not right; the sizes do not match... anyway, here is what I have:

 [17:50:32] [EMAIL PROTECTED]: /root  zfs list
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 mypool 272G  1.95G  24.5K  /mypool
 mypool/d   272G  1.95G   143G  /d/d2
 mypool/[EMAIL PROTECTED]  3.72G  -   123G  -
 mypool/[EMAIL PROTECTED]  22.3G  -   156G  -
 mypool/[EMAIL PROTECTED]  23.3G  -   161G  -
 mypool/[EMAIL PROTECTED]  16.1G  -   172G  -
 mypool/[EMAIL PROTECTED]  13.8G  -   168G  -
 mypool/[EMAIL PROTECTED] 15.7G  -   168G  -
 mypool/[EMAIL PROTECTED]192M  -   143G  -
 mypool/[EMAIL PROTECTED]189M  -   143G  -
 mypool/[EMAIL PROTECTED]200M  -   143G  -
 mypool/[EMAIL PROTECTED]  3.93M  -   143G  -
 mypool2463G   474G52K  /mypool2
 mypool2/d  318G   474G   168G  /mypool2/d
 mypool2/[EMAIL PROTECTED]  4.40G  -   145G  -
 mypool2/[EMAIL PROTECTED]  26.1G  -   184G  -
 mypool2/[EMAIL PROTECTED]  27.3G  -   189G  -
 mypool2/[EMAIL PROTECTED]  18.7G  -   202G  -
 mypool2/[EMAIL PROTECTED]  16.1G  -   197G  -
 mypool2/[EMAIL PROTECTED]18.2G  -   198G  -
 mypool2/d3 145G   474G   145G  legacy

 see:
 mypool/d   272G  1.95G   143G  /d/d2
 mypool2/d  318G   474G   168G  /mypool2/d

 they are the same copies, but their sizes differ quite a bit: the
 original is 272G and the copy which I duplicated by zfs send/receive is
 318G. All the other snaps differ as well. Why is there a difference?
 Could someone explain how this works?


No, snapshot space usage reporting is beyond brain-dead at this point:
shared data between snaps is hidden from the reporting, and there is no way
(short of deleting snapshots) to see how much space they are holding
hostage.  The ZFS team has said that they are working on providing more
detail -- I have not seen anything yet. Why it was considered a valid data
column in its current state is anyone's guess -- on one of my servers I
have over 7 TB unaccounted for in the zfs listing because of this issue.

-Wade


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive question

2007-04-16 Thread Krzys
[18:19:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED] mypool/[EMAIL PROTECTED] | 
zfs receive -F mypool2/[EMAIL PROTECTED]

invalid option 'F'
usage:
receive [-vn] filesystem|volume|snapshot
receive [-vn] -d filesystem

For the property list, run: zfs set|get

It does not seem to work, unless I am doing it incorrectly.

Chris

On Tue, 17 Apr 2007, Nicholas Lee wrote:


On 4/17/07, Krzys [EMAIL PROTECTED] wrote:



and when I did try to run that last command I got the following error:
[16:26:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED]
mypool/[EMAIL PROTECTED] |
zfs receive mypool2/[EMAIL PROTECTED]
cannot receive: destination has been modified since most recent snapshot

is there any way to do such replication by zfs send/receive and avoid such
an error message? Is there any way to force the file system not to be
mounted? Is there any way to make it maybe a read-only partition and then,
when it's needed, make it live or whatever?



Check the -F option to zfs receive. This automatically rolls back the
target.
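
On a build that has it, the whole incremental cycle is then just (snapshot
names below are placeholders):

# zfs snapshot mypool/d@snap7
# zfs send -i mypool/d@snap6 mypool/d@snap7 | zfs receive -F mypool2/d

with -F quietly rolling mypool2/d back to its most recent snapshot before
applying the incremental stream.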
Nicholas


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS, Multiple Machines and NFS

2007-04-16 Thread Adrian Thompson


Hi!

I am very new to ZFS (never installed it), and I have a small question.

Is it possible with ZFS to merge multiple machines with NFS into one ZFS 
filesystem so they look like one storage device?


As I'm typing this I feel like a fool, but I'll ask anyway. :-)

Thanks!

-=//-\drian Thompson=-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs question as to sizes

2007-04-16 Thread Eric Schrock
On Mon, Apr 16, 2007 at 05:13:37PM -0500, [EMAIL PROTECTED] wrote:
 
 Why it was considered a valid data column in its current state is
 anyone's guess.
 

This column is precise and valid.  It represents the amount of space
uniquely referenced by the snapshot, and therefore the amount of space
that would be freed were it to be deleted.

The shared space between snapshots, besides being difficult to
calculate, is nearly impossible to enumerate in anything beyond the most
trivial setups.  For example, with just snapshots 'a b c d e', you can
have space shared by the following combinations:

a b
a b c
a b c d
a b c d e
b c
b c d
b c d e
c d
c d e
d e

Not to mention the space shared with the active filesystem.  With dozens
of snapshots, you're talking about hundreds or thousands of
combinations.  It's certainly possible to calculate the space used by
various snapshot intersections, but presenting it is another matter.
Perhaps you could describe how you would like this information to be
presented in zfs(1M).
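
A quick way to see the "uniquely referenced" semantics in action (a sketch,
using a hypothetical scratch filesystem tank/demo):

# zfs create tank/demo
# mkfile 100m /tank/demo/f1
# zfs snapshot tank/demo@s1
# zfs snapshot tank/demo@s2
# rm /tank/demo/f1
# zfs list -r tank/demo

The 100MB is now referenced by both @s1 and @s2, so it shows up in neither
snapshot's USED column; destroy either snapshot and it appears as USED on
the surviving one.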

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive question

2007-04-16 Thread Cindy . Swearingen

Chris,

Looks like you're not running a Solaris release that contains
the zfs receive -F option. This option is in the current Solaris community
release, build 48.

http://docs.sun.com/app/docs/doc/817-2271/6mhupg6f1?a=view#gdsup

Otherwise, you'll have to wait until an upcoming Solaris 10 release.
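
In the meantime, a possible workaround (a sketch; the snapshot names below
are placeholders, and it assumes nothing written to the target since its
last snapshot needs to be kept) is to roll the target back by hand before
the incremental receive, and optionally keep it read-only so it doesn't get
modified again:

# zfs rollback mypool2/d@snap6
# zfs set readonly=on mypool2/d
# zfs send -i mypool/d@snap6 mypool/d@snap7 | zfs receive mypool2/d

That rollback is essentially what receive -F does for you automatically.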

Cindy

Krzys wrote:
[18:19:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED] 
mypool/[EMAIL PROTECTED] | zfs receive -F mypool2/[EMAIL PROTECTED]

invalid option 'F'
usage:
 receive [-vn] filesystem|volume|snapshot
 receive [-vn] -d filesystem

For the property list, run: zfs set|get

It does not seem to work, unless I am doing it incorrectly.

Chris

On Tue, 17 Apr 2007, Nicholas Lee wrote:


On 4/17/07, Krzys [EMAIL PROTECTED] wrote:




and when I did try to run that last command I got the following error:
[16:26:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED]
mypool/[EMAIL PROTECTED] |
zfs receive mypool2/[EMAIL PROTECTED]
cannot receive: destination has been modified since most recent snapshot

is there any way to do such replication by zfs send/receive and avoid such
an error message? Is there any way to force the file system not to be
mounted? Is there any way to make it maybe a read-only partition and then,
when it's needed, make it live or whatever?




Check the -F option to zfs receive. This automatically rolls back the
target.
Nicholas


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] crashed remote system trying to do zfs send / receive

2007-04-16 Thread Robert Milkowski
Hello Krzys,

Sunday, April 15, 2007, 4:53:43 AM, you wrote:

K Strange thing, I did try to do zfs send/receive using zfs.

K On the from host I did the following:


K bash-3.00# zfs send mypool/zones/[EMAIL PROTECTED] | ssh 10.0.2.79 zfs 
receive
K mypool/zones/[EMAIL PROTECTED]
K Password:
K ^CKilled by signal 2.

K 1 or 2 minutes later I did break this command and I wanted to time it so I 
did
K change command and reissued it.

K bash-3.00# time zfs send mypool/zones/[EMAIL PROTECTED] | ssh 10.0.2.79 zfs
K receive mypool/zones/[EMAIL PROTECTED]
K Password:
K ^CKilled by signal 2.


K real0m7.346s
K user0m0.220s
K sys 0m0.036s
K bash-3.00#

K Right after this I got on remote server kernel panic and here is the output:

K [22:35:30] @zglobix1: /root 
K panic[cpu1]/thread=30001334380: dangling dbufs (dn=6000a13eba0, 
K dbuf=60007d927e8)


K 02a1004f9030 zfs:dnode_evict_dbufs+19c (6000a13eba0, 1, 7b64e800, 
K 6000a9e34b0, 1, 2a1004f90e8)
K%l0-3: 06000a13edb0   06000a13edb8
K%l4-7: 02a1004f90e8 060007be3910 0003 0001
K 02a1004f9230 zfs:dmu_objset_evict_dbufs+d8 (21, 0, 0, 7b648400, 
6000a9e32c0,
K 6000a9e32c0)
K%l0-3: 060002738f89 000f 060007fef8f0 06000a9e34a0
K%l4-7: 06000a13eba0 06000a9e3398 0001 7b6485e7
K 02a1004f92e0 zfs:dmu_objset_evict+b4 (60007138900, 6000a9e32c0, 180e580,
K 7b60a800, 7b64e400, 7b64e400)
K%l0-3:  0180c000 0005 
K%l4-7: 060007b9f600 7b60a800 7b64e400 
K 02a1004f93a0 zfs:dsl_dataset_evict+34 (60007138900, 7b60af7c, 18364c0,
K 60001ac90c0, 6000a9e32c0, 60007b9f600)
K%l0-3:   02a10001fcc0 02a10001fcc0
K%l4-7: 030b1b80  01ac8d1a 018a8400
K 02a1004f9450 zfs:dbuf_evict_user+48 (60007138908, 60007b9f600, 
60008666cd0,
K 0, 0, 60008666be8)
K%l0-3:  060007138900 0013 
K%l4-7: 03000107c000 018ade70 0bc0 7b612fa4
K 02a1004f9500 zfs:zfsctl_ops_root+b184ac4 (60008666c40, 60008666be8,
K 70478000, 3, 3, 0)
K%l0-3: 060001ac90c0 000f 0600071389a0 
K%l4-7:   0001 70478018
K 02a1004f95b0 zfs:dmu_recvbackup+8e8 (60006f32d00, 60006f32fd8, 
60006f32e30,
K 1, 60006ad5fa8, 0)
K%l0-3: 060006f32d15 060007138900 7b607c00 7b648000
K%l4-7: 0040 0354 0001 0138
K 02a1004f9780 zfs:zfs_ioc_recvbackup+38 (60006f32000, 0, 0, 0, 9, 0)
K%l0-3: 0004  006d 
K%l4-7:  060006f3200c  0073
K 02a1004f9830 zfs:zfsdev_ioctl+160 (70478c00, 5d, ffbfebc8, 1f, 7c, 1000)
K%l0-3: 060006f32000   007c
K%l4-7: 7b63b688 70479248 02e8 70478f60
K 02a1004f98e0 genunix:fop_ioctl+20 (60006c8fd00, 5a1f, ffbfebc8, 13,
K 600067d34b0, 12066d4)
K%l0-3: 0600064da200 0600064da200 0003 060006cc6fd8
K%l4-7: ff342036 ff345c7c  018a9400
K 02a1004f9990 genunix:ioctl+184 (3, 60006c03688, ffbfebc8, 6500, ff00,
K 5a1f)
K%l0-3:   0004 c1ac
K%l4-7: 0001   

K syncing file systems... 2 1 done
K dumping to /dev/dsk/c1t1d0s1, offset 3436642304, content: kernel
K   94% done
K SC Alert: Failed to send email alert to the primary mailserver.

K SC Alert: Failed to send email alert for recent event.
K 100% done: 71483 pages dumped, compression ratio 5.34, dump succeeded
K rebooting...

K SC Alert: Host System has Reset
K Probing system devices
K Probing memory
K Probing I/O buses

K Sun Fire V240, No Keyboard
K Copyright 1998-2004 Sun Microsystems, Inc.  All rights reserved.
K OpenBoot 4.16.2, 16384 MB memory installed, Serial #63395381.
K Ethernet address 0:3:ba:c7:56:35, Host ID: 83c75635.



K Initializing  2048MB of memory at addr10 \
K SC Alert: Failed to send email alert for recent event.
K Rebooting with command: boot
K Boot device: disk1  File and args:
K SunOS Release 5.10 Version Generic_118833-36 64-bit
K Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
K Use is subject to license terms.
K /
K SC Alert: Failed to send email alert for recent event.
K Hardware watchdog enabled
K Hostname: zglobix1
K checking ufs filesystems
K /dev/rdsk/c1t1d0s7: is logging.
K Failed to send email alert for recent event.

K SC Alert: Failed to send email alert for recent event.

K zglobix1 console login:




K Any 

Re: [zfs-discuss] ZFS, Multiple Machines and NFS

2007-04-16 Thread Rayson Ho

Adrian, you can take a look at pNFS:

http://opensolaris.org/os/community/os_user_groups/frosug/pNFS/FROSUG-pNFS.pdf

Project homepage:
http://opensolaris.org/os/project/nfsv41/

Rayson



On 4/16/07, Jason A. Hoffman [EMAIL PROTECTED] wrote:

On Apr 16, 2007, at 3:24 PM, Adrian Thompson wrote:


 Hi!

 I am very new to ZFS (never installed it), and I have a small
 question.

 Is it possible with ZFS to merge multiple machines with NFS into
 one ZFS filesystem so they look like one storage device?

 As I'm typing this I feel like a fool, but I'll ask anyway. :-)

 Thanks!

 -=//-\drian Thompson=-


Not a foolish question at all, but the answer is no. ZFS works with
block devices.

Regards, Jason
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive question

2007-04-16 Thread Krzys
Ah, ok, not a problem. Do you know, Cindy, when the next Solaris update is 
going to be released by Sun? Yes, I am running U3 at this moment.


Regards,

Chris

On Mon, 16 Apr 2007, [EMAIL PROTECTED] wrote:


Chris,

Looks like you're not running a Solaris release that contains
the zfs receive -F option. This option is in current Solaris community
release, build 48.

http://docs.sun.com/app/docs/doc/817-2271/6mhupg6f1?a=view#gdsup

Otherwise, you'll have to wait until an upcoming Solaris 10 release.

Cindy

Krzys wrote:
[18:19:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED] 
mypool/[EMAIL PROTECTED] | zfs receive -F mypool2/[EMAIL PROTECTED]

invalid option 'F'
usage:
 receive [-vn] filesystem|volume|snapshot
 receive [-vn] -d filesystem

For the property list, run: zfs set|get

It does not seem to work, unless I am doing it incorrectly.

Chris

On Tue, 17 Apr 2007, Nicholas Lee wrote:


On 4/17/07, Krzys [EMAIL PROTECTED] wrote:




and when I did try to run that last command I got the following error:
[16:26:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED]
mypool/[EMAIL PROTECTED] |
zfs receive mypool2/[EMAIL PROTECTED]
cannot receive: destination has been modified since most recent snapshot

is there any way to do such replication by zfs send/receive and avoid such
an error message? Is there any way to force the file system not to be
mounted? Is there any way to make it maybe a read-only partition and then,
when it's needed, make it live or whatever?




Check the -F option to zfs receive. This automatically rolls back the
target.
Nicholas


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] crashed remote system trying to do zfs send / receive

2007-04-16 Thread Krzys

Ah, perfect then... Thank you so much for letting me know...

Regards,

Chris


On Tue, 17 Apr 2007, Robert Milkowski wrote:


Hello Krzys,

Sunday, April 15, 2007, 4:53:43 AM, you wrote:

K Strange thing, I did try to do zfs send/receive using zfs.

K On the from host I did the following:


K bash-3.00# zfs send mypool/zones/[EMAIL PROTECTED] | ssh 10.0.2.79 zfs 
receive
K mypool/zones/[EMAIL PROTECTED]
K Password:
K ^CKilled by signal 2.

K 1 or 2 minutes later I did break this command and I wanted to time it so I 
did
K change command and reissued it.

K bash-3.00# time zfs send mypool/zones/[EMAIL PROTECTED] | ssh 10.0.2.79 zfs
K receive mypool/zones/[EMAIL PROTECTED]
K Password:
K ^CKilled by signal 2.


K real0m7.346s
K user0m0.220s
K sys 0m0.036s
K bash-3.00#

K Right after this I got on remote server kernel panic and here is the output:

K [22:35:30] @zglobix1: /root 
K panic[cpu1]/thread=30001334380: dangling dbufs (dn=6000a13eba0,
K dbuf=60007d927e8)


K 02a1004f9030 zfs:dnode_evict_dbufs+19c (6000a13eba0, 1, 7b64e800,
K 6000a9e34b0, 1, 2a1004f90e8)
K%l0-3: 06000a13edb0   06000a13edb8
K%l4-7: 02a1004f90e8 060007be3910 0003 0001
K 02a1004f9230 zfs:dmu_objset_evict_dbufs+d8 (21, 0, 0, 7b648400, 
6000a9e32c0,
K 6000a9e32c0)
K%l0-3: 060002738f89 000f 060007fef8f0 06000a9e34a0
K%l4-7: 06000a13eba0 06000a9e3398 0001 7b6485e7
K 02a1004f92e0 zfs:dmu_objset_evict+b4 (60007138900, 6000a9e32c0, 180e580,
K 7b60a800, 7b64e400, 7b64e400)
K%l0-3:  0180c000 0005 
K%l4-7: 060007b9f600 7b60a800 7b64e400 
K 02a1004f93a0 zfs:dsl_dataset_evict+34 (60007138900, 7b60af7c, 18364c0,
K 60001ac90c0, 6000a9e32c0, 60007b9f600)
K%l0-3:   02a10001fcc0 02a10001fcc0
K%l4-7: 030b1b80  01ac8d1a 018a8400
K 02a1004f9450 zfs:dbuf_evict_user+48 (60007138908, 60007b9f600, 
60008666cd0,
K 0, 0, 60008666be8)
K%l0-3:  060007138900 0013 
K%l4-7: 03000107c000 018ade70 0bc0 7b612fa4
K 02a1004f9500 zfs:zfsctl_ops_root+b184ac4 (60008666c40, 60008666be8,
K 70478000, 3, 3, 0)
K%l0-3: 060001ac90c0 000f 0600071389a0 
K%l4-7:   0001 70478018
K 02a1004f95b0 zfs:dmu_recvbackup+8e8 (60006f32d00, 60006f32fd8, 
60006f32e30,
K 1, 60006ad5fa8, 0)
K%l0-3: 060006f32d15 060007138900 7b607c00 7b648000
K%l4-7: 0040 0354 0001 0138
K 02a1004f9780 zfs:zfs_ioc_recvbackup+38 (60006f32000, 0, 0, 0, 9, 0)
K%l0-3: 0004  006d 
K%l4-7:  060006f3200c  0073
K 02a1004f9830 zfs:zfsdev_ioctl+160 (70478c00, 5d, ffbfebc8, 1f, 7c, 1000)
K%l0-3: 060006f32000   007c
K%l4-7: 7b63b688 70479248 02e8 70478f60
K 02a1004f98e0 genunix:fop_ioctl+20 (60006c8fd00, 5a1f, ffbfebc8, 13,
K 600067d34b0, 12066d4)
K%l0-3: 0600064da200 0600064da200 0003 060006cc6fd8
K%l4-7: ff342036 ff345c7c  018a9400
K 02a1004f9990 genunix:ioctl+184 (3, 60006c03688, ffbfebc8, 6500, ff00,
K 5a1f)
K%l0-3:   0004 c1ac
K%l4-7: 0001   

K syncing file systems... 2 1 done
K dumping to /dev/dsk/c1t1d0s1, offset 3436642304, content: kernel
K   94% done
K SC Alert: Failed to send email alert to the primary mailserver.

K SC Alert: Failed to send email alert for recent event.
K 100% done: 71483 pages dumped, compression ratio 5.34, dump succeeded
K rebooting...

K SC Alert: Host System has Reset
K Probing system devices
K Probing memory
K Probing I/O buses

K Sun Fire V240, No Keyboard
K Copyright 1998-2004 Sun Microsystems, Inc.  All rights reserved.
K OpenBoot 4.16.2, 16384 MB memory installed, Serial #63395381.
K Ethernet address 0:3:ba:c7:56:35, Host ID: 83c75635.



K Initializing  2048MB of memory at addr10 \
K SC Alert: Failed to send email alert for recent event.
K Rebooting with command: boot
K Boot device: disk1  File and args:
K SunOS Release 5.10 Version Generic_118833-36 64-bit
K Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
K Use is subject to license terms.
K /
K SC Alert: Failed to send email alert for recent event.
K Hardware watchdog enabled
K Hostname: zglobix1
K checking ufs filesystems
K /dev/rdsk/c1t1d0s7: is logging.
K Failed to 

Re: [zfs-discuss] zfs send/receive question

2007-04-16 Thread Shawn Walker

On 16/04/07, Krzys [EMAIL PROTECTED] wrote:

Ah, ok, not a problem, do you know Cindy when next Solaris Update is going to be
released by SUN? Yes, I am running U3 at this moment.


Summer is what I last read (July?).
--
Less is only more where more is no good. --Frank Lloyd Wright

Shawn Walker, Software and Systems Analyst
[EMAIL PROTECTED] - http://binarycrusader.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to bind the oracle 9i data file to zfs volumes

2007-04-16 Thread Manoj Joseph

Simon wrote:

So, does this mean it is an Oracle bug? Or is it impossible (or
inappropriate) to use ZFS/SVM volumes to create Oracle data files, and
should one instead use a zfs or ufs filesystem to do this?


Oracle can use SVM volumes to hold its data. Unless I am mistaken, it 
should be able to use zvols as well.


However, googling for 'zvol + Oracle' did not get me anything useful. 
Perhaps it is not a configuration that is very popular. ;)
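
For anyone who wants to try it, creating a zvol and handing it to Oracle as
a raw device would look roughly like this (the pool and volume names are
hypothetical, and the 10g size is arbitrary):

# zfs create -V 10g mypool/oravol01
# ls -lL /dev/zvol/rdsk/mypool/oravol01

Oracle would then be pointed at the /dev/zvol/rdsk/... character device in
much the same way as at a raw SVM metadevice.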


My $ 0.02.

-Manoj
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshot features

2007-04-16 Thread Torrey McMahon

Frank Cusack wrote:
On April 16, 2007 10:24:04 AM +0200 Selim Daoud 
[EMAIL PROTECTED] wrote:

hi all,

when taking several zfs snapshots of a given fs, there are dependencies
between snapshots that complicate the management of snapshots.
is there a plan to ease these dependencies, so we can reach the snapshot
functionality that is offered in other products such as Compellent
(http://www.compellent.com/products/software/continuous_snapshots.aspx)?

Compellent's software allows setting **retention periods** for different
snapshots and will manage their migration or deletion automatically


retention period is pretty easily managed via cron 


Yeah but cron isn't easily managed by anything. :-P
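
For the simple time-based retention case, though, something like this is
all it takes – a sketch only, with a hypothetical filesystem name and
script path, driven from root's crontab:

0 0 * * * /usr/local/bin/zfs-daily-snap.sh

#!/bin/sh
# zfs-daily-snap.sh: take a daily snapshot, keep only the newest $KEEP
FS=tank/data    # placeholder filesystem name
KEEP=30
/usr/sbin/zfs snapshot $FS@daily-`date +%Y%m%d`
SNAPS=`/usr/sbin/zfs list -H -t snapshot -o name -s creation | grep "^$FS@daily-"`
COUNT=`echo "$SNAPS" | grep -c .`
if [ $COUNT -gt $KEEP ]; then
        # oldest snapshots sort first, so trim from the top of the list
        echo "$SNAPS" | head -`expr $COUNT - $KEEP` | \
            while read SNAP; do /usr/sbin/zfs destroy "$SNAP"; done
fi

That covers retention periods; what it doesn't give you is Compellent-style
automatic migration between storage tiers.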
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Torrey McMahon

Tony Galway wrote:


I had previously undertaken a benchmark that pits “out of box” 
performance of UFS via SVM, VxFS and ZFS but was waylaid due to some 
outstanding availability issues in ZFS. These have been taken care of, 
and I am once again undertaking this challenge on behalf of my 
customer. The idea behind this benchmark is to show


a. How ZFS might displace the current commercial volume and file 
system management applications being used.


b. The learning curve of moving from current volume management 
products to ZFS.


c. Performance differences across the different volume management 
products.


VDBench is the test bed of choice as this has been accepted by the 
customer as a telling and accurate indicator of performance. The last 
time I attempted this test it had been suggested that VDBench is not 
appropriate to testing ZFS, I cannot see that being a problem, VDBench 
is a tool – if it highlights performance problems, then I would think 
it is a very effective tool so that we might better be able to fix 
those deficiencies.




First, VDBench is a Sun-internal and partner-only tool, so you might not 
get much response on this list.
Second, VDBench is great for testing raw block I/O devices. I think a 
tool that does file system testing will get you better data.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to bind the oracle 9i data file to zfs volumes

2007-04-16 Thread Richard Elling

Manoj Joseph wrote:

Simon wrote:

So,does mean this is oracle bug ? Or it's impossible(or inappropriate)
to use ZFS/SVM volumes to create oracle data file,instead,should use
zfs or ufs filesystem to do this.


Oracle can use SVM volumes to hold its data. Unless I am mistaken, it 
should be able to use zvols as well.


Yes.  Though I believe most people will prefer regular file systems or
ASM.  We discuss performance implications on the ZFS wiki at
http://solarisinternals.com

However, googling for 'zvol + Oracle' did not get me anything useful. 
Perhaps it is not a configuration that is very popular. ;)


Sounds like an opportunity... please share your experiences.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss