Re: [zfs-discuss] ZFS, XFS, and EXT4 compared

2007-08-29 Thread mike
On 8/29/07, Jeffrey W. Baker <[EMAIL PROTECTED]> wrote:
> I have a lot of people whispering "zfs" in my virtual ear these days,
> and at the same time I have an irrational attachment to xfs based
> entirely on its lack of the 32000 subdirectory limit.  I'm not afraid of
> ext4's newness, since really a lot of that stuff has been in Lustre for
> years.  So a-benchmarking I went.  Results at the bottom:
>
> http://tastic.brillig.org/~jwb/zfs-xfs-ext4.html
>
> Short version: ext4 is awesome.  zfs has absurdly fast metadata
> operations but falls apart on sequential transfer.  xfs has great
> sequential transfer but really bad metadata ops, like 3 minutes to tar
> up the kernel.
>
> It would be nice if mke2fs would copy xfs's code for optimal layout on a
> software raid.  The mkfs defaults and the mdadm defaults interact badly.

This is cool to see.  However, performance wouldn't be my reason for moving
to ZFS; the inline checksumming and related integrity features are what I
want.  If someone could deliver a nearly incorruptible filesystem (or just a
Linux version of ZFS... btrfs looks promising), that would be even better.

Sadly, ext4+swraid isn't as good on that front, or I might have tried it,
since the wait for the right hardware support for ZFS seems open-ended at
this point.


Re: [zfs-discuss] ZFS, XFS, and EXT4 compared

2007-08-29 Thread Cyril Plisko
Jeffrey,

It would be interesting to see your zpool layout as well; it can
significantly influence the results obtained in the benchmarks.



On 8/30/07, Jeffrey W. Baker <[EMAIL PROTECTED]> wrote:
> I have a lot of people whispering "zfs" in my virtual ear these days,
> and at the same time I have an irrational attachment to xfs based
> entirely on its lack of the 32000 subdirectory limit.  I'm not afraid of
> ext4's newness, since really a lot of that stuff has been in Lustre for
> years.  So a-benchmarking I went.  Results at the bottom:
>
> http://tastic.brillig.org/~jwb/zfs-xfs-ext4.html
>
> Short version: ext4 is awesome.  zfs has absurdly fast metadata
> operations but falls apart on sequential transfer.  xfs has great
> sequential transfer but really bad metadata ops, like 3 minutes to tar
> up the kernel.
>
> It would be nice if mke2fs would copy xfs's code for optimal layout on a
> software raid.  The mkfs defaults and the mdadm defaults interact badly.
>
> Postmark is a somewhat bogus benchmark with some obvious quantization
> problems.
>
> Regards,
> jwb
>


-- 
Regards,
Cyril


[zfs-discuss] ZFS, XFS, and EXT4 compared

2007-08-29 Thread Jeffrey W. Baker
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit.  I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years.  So a-benchmarking I went.  Results at the bottom:

http://tastic.brillig.org/~jwb/zfs-xfs-ext4.html

Short version: ext4 is awesome.  zfs has absurdly fast metadata
operations but falls apart on sequential transfer.  xfs has great
sequential transfer but really bad metadata ops, like 3 minutes to tar
up the kernel.

It would be nice if mke2fs would copy xfs's code for optimal layout on a
software raid.  The mkfs defaults and the mdadm defaults interact badly.
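
For what it's worth, ext2/3/4 can be told about the RAID geometry by hand.
A sketch, assuming a 4-disk md RAID5 with 64 KiB chunks and a 4 KiB
filesystem block size (the stripe-width extended option only exists in newer
e2fsprogs; older versions accept stride alone):

    # stride = chunk size / block size = 64k / 4k = 16
    # stripe-width = stride * (number of data disks) = 16 * 3 = 48
    mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mke2fs -j -b 4096 -E stride=16,stripe-width=48 /dev/md0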

Postmark is a somewhat bogus benchmark with some obvious quantization
problems.

Regards,
jwb



Re: [zfs-discuss] New zfs pr0n server :)))

2007-08-29 Thread michael
Do either of you know the current story on this card?  I can't get it to work
at all in Solaris 10, but I'm very new to the OS.


thanks!
 
 


Re: [zfs-discuss] Best way to incorporate disk size tolerance into

2007-08-29 Thread Richard Elling
MC wrote:
>> This is a problem for replacement, not creation.
> 
> You're talking about solving the problem in the future?  I'm talking about 
> working around the problem today.  :)  This isn't a fluffy dream problem.  I 
> ran into this last month when an RMA'd drive wouldn't fit back into a RAID5 
> array.  RAIDZ is subject to the exact same problem, so I want to find the 
> solution before making a RAIDZ array.
> 
>> The authoritative answer is in the man page for zpool.
> 
> You quoted the exact same line that I quoted in my original post.  That isn't 
> a solution.  That is a constraint which causes the problem and which the 
> solution must work around.
> 
> The two solutions listed here are slicing and hacking the "whole disk" label 
> to be smaller than the whole disk.  There is no consensus here on what 
> solution, if any, should be used.  I would like there to be, so I'll leave 
> the original question open.

Slicing seems simple enough.
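
A minimal sketch of that approach (hypothetical device names; the idea is to
label each disk with an s0 slice a little smaller than the nominal capacity,
so a marginally smaller replacement can still be labelled identically):

    # Build the raidz out of slices rather than whole disks.
    zpool create tank raidz c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0
    # After relabelling an RMA'd replacement with the same s0 geometry:
    zpool replace tank c1t2d0s0

The trade-off noted elsewhere in the thread still applies: with slices, ZFS
will not manage the drive's write cache for you.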
  -- richard


Re: [zfs-discuss] Best way to incorporate disk size tolerance into

2007-08-29 Thread MC
> This is a problem for replacement, not creation.

You're talking about solving the problem in the future?  I'm talking about 
working around the problem today.  :)  This isn't a fluffy dream problem.  I 
ran into this last month when an RMA'd drive wouldn't fit back into a RAID5 
array.  RAIDZ is subject to the exact same problem, so I want to find the 
solution before making a RAIDZ array.

> The authoritative answer is in the man page for zpool.

You quoted the exact same line that I quoted in my original post.  That isn't a 
solution.  That is a constraint which causes the problem and which the solution 
must work around.

The two solutions listed here are slicing and hacking the "whole disk" label to 
be smaller than the whole disk.  There is no consensus here on what solution, 
if any, should be used.  I would like there to be, so I'll leave the original 
question open.
 
 


Re: [zfs-discuss] Is there _any_ suitable motherboard?

2007-08-29 Thread MP
Has anyone considered the Gigabyte GA-G33-DS3R, which uses the G33 chipset (a
P35 with built-in video)?  It has that built-in VGA and the most on-board,
well-supported SATA ports I could find:
8 SATA ports in total; 6 provided by the ICH9 and 2 by a JMB363.  The latter
is presumably supported by OpenSolaris?  It has been out a while now, at
least, and is well supported by other OSes.
One possible problem is the Realtek NIC, which may not be supported by
OpenSolaris.
All the hardware is supported by FreeBSD 7.0, if that's of any use.
With built-in VGA it's a very low-power board.
Couple that with one of the very cheap Core 2 Duo E2140 CPUs and you have a
good base for a ZFS RAIDZ system.
Cheers.
 
 


[zfs-discuss] Correct procedure to remove a ZFS file system from a non-global zone

2007-08-29 Thread Ril
I followed the procedure to add a ZFS dataset to a non-global, whole-root
zone as a delegated dataset, as described in the ZFS Admin Guide, page 129.
What is the proper way to remove it?  I tried the following from the global
zone (commands sketched below):
1) halt the zone
2) use zonecfg to remove the dataset
3) boot the zone
When I logged back into the zone, the mount point was still present (though
now under /), but the files were gone, and they weren't visible in the pool
from the global zone either.
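
Roughly, the steps above were (a sketch; 'myzone' and 'tank/data' are
placeholder names for the zone and the delegated dataset):

    # From the global zone:
    zoneadm -z myzone halt
    zonecfg -z myzone 'remove dataset name=tank/data'
    zoneadm -z myzone boot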

Is this the correct procedure to remove a ZFS file system from a non-global
zone?  Is there any way to do this without losing the data in the file system?

Thanks!

Ril
 
 


Re: [zfs-discuss] zpool ids

2007-08-29 Thread Eric Schrock
On Wed, Aug 29, 2007 at 11:40:34PM +0530, Balu manyam wrote:
> Thanks, Eric - I am looking for a way to import a zpool which is a
> bit-by-bit copy of a zpool (hence also the metadata) that is already
> imported on the same host -- that is, two pools, one imported and one not,
> are presented to the same host.
> 
> Do you have any suggestions for this?

There is currently no way to do this.  It has come up before, but it's
tricky.  By definition, a GUID is supposed to be a globally unique
identifier for a pool.  It would be pretty simple to write a utility
that, given a device, changed its GUID to another random value.  See
zpool_read_label() for how the label is read in userland.  It would not
be hard to write out an updated packed nvlist.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] zpool ids

2007-08-29 Thread Bill Sommerfeld
On Wed, 2007-08-29 at 09:41 -0700, Eric Schrock wrote:
> Note that 'fstyp -v' does the same thing as 'zdb -l' and
> is marginally more stable.  The output is still technically subject to
> change, but a change is highly unlikely (given the pain it would
> cause).

If other programs depend on aspects of the output of fstyp -v or fstyp
-a on a ZFS vdev, we should probably boost the stability level of
specific fields in the output and clarify what will remain subject to
change.  For instance, we could say that the pool_guid, hostname, and
hostid fields won't go away or change meaning, but that other fields may
appear or disappear, and that fields may be reordered at any time.
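
For instance, a consumer could restrict itself to those fields (a sketch; the
device path is hypothetical):

    # Dump all label attributes and keep only the fields proposed as stable.
    fstyp -a /dev/dsk/c1t0d0s0 | egrep 'pool_guid|hostname|hostid'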

- Bill






Re: [zfs-discuss] zpool ids

2007-08-29 Thread Eric Schrock
On Wed, Aug 29, 2007 at 10:29:50PM +0530, Balu manyam wrote:
> Thanks, Eric and Darren -- 'zdb -l <dev>' was indeed what I was
> looking for.
> 
> Also, is there an easy way to change this ID manually?  This would be
> extremely useful in a SAN environment.

No, there is no way to change it.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] zpool ids

2007-08-29 Thread Eric Schrock
On Wed, Aug 29, 2007 at 05:39:19PM +0100, Darren J Moffat wrote:
> Eric Schrock wrote:
> > You can use 'zdb -l <dev>', where <dev> is a device in the pool, and then
> > look for the 'pool_guid' line (note that this is "not an interface" and
> > could theoretically change).  An upcoming putback will add the GUID as a
> > first-class pool property, so that it can be used in 'zpool list' or
> > 'zpool get'.
> 
> Will 'zpool get -o guid/name' work even on an exported pool, which I
> believe is the original question?

No, the only way to do that is with 'zdb -l'.  I was just mentioning it
as an orthogonal FYI, since it won't help with device -> pool
translation.  Note that 'fstyp -v' does the same thing as 'zdb -l' and
is marginally more stable.  The output is still technically subject to
change, but a change is highly unlikely (given the pain it would
cause).

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] zpool ids

2007-08-29 Thread Darren J Moffat
Eric Schrock wrote:
> You can use 'zdb -l <dev>', where <dev> is a device in the pool, and then
> look for the 'pool_guid' line (note that this is "not an interface" and
> could theoretically change).  An upcoming putback will add the GUID as a
> first-class pool property, so that it can be used in 'zpool list' or
> 'zpool get'.

Will 'zpool get -o guid/name' work even on an exported pool, which I
believe is the original question?

> On Wed, Aug 29, 2007 at 09:02:50AM -0700, Manyam wrote:
>> Hi folks --
>>  What's the best way to get the ID associated with a zpool name (without
>> importing it if it is not already imported)?  That is, given the disk
>> device name, I would love to get the ID of the zpool of which this disk is
>> a part.


-- 
Darren J Moffat


Re: [zfs-discuss] zpool ids

2007-08-29 Thread Eric Schrock
You can use 'zdb -l <dev>', where <dev> is a device in the pool, and then
look for the 'pool_guid' line (note that this is "not an interface" and
could theoretically change).  An upcoming putback will add the GUID as a
first-class pool property, so that it can be used in 'zpool list' or
'zpool get'.
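
For example (a sketch; the device path is hypothetical):

    # Print the vdev labels of one disk in the pool and pull out the GUID.
    zdb -l /dev/dsk/c1t0d0s0 | grep pool_guid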

- Eric

On Wed, Aug 29, 2007 at 09:02:50AM -0700, Manyam wrote:
> Hi folks --
>  What's the best way to get the ID associated with a zpool name (without
> importing it if it is not already imported)?  That is, given the disk
> device name, I would love to get the ID of the zpool of which this disk is
> a part.
> 
> Thanks!
> 
> --Balu
>  
>  

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] import zfs dataset online in a zone

2007-08-29 Thread Darren Dunham
> I have a similar problem - I am trying to add a zfs volume to a non-global
> zone without rebooting it.  It is not practical to shut down the application
> just to add one more raw device.
> Is there a way to manually create the device files in /dev/zvol/... in the
> zone and make it aware of the change?

I haven't ever tried to do that, but it might work.  You might be able
to just do the mknod in the global zone, but have it create the device
node in the non-global zone's filesystem tree.  Use the same major and
minor numbers as the global zone's device.
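
Something along these lines might work (an untested sketch; the pool, volume,
and zone path are placeholders):

    # In the global zone, note the major/minor numbers of the zvol device...
    ls -lL /dev/zvol/rdsk/tank/vol1
    # ...then create a matching node under the zone's root, substituting the
    # numbers reported above for <major> and <minor>.
    mknod /zones/myzone/root/dev/zvol/rdsk/tank/vol1 c <major> <minor>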

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >


[zfs-discuss] zpool ids

2007-08-29 Thread Manyam
Hi folks --
 What's the best way to get the ID associated with a zpool name (without
importing it if it is not already imported)?  That is, given the disk device
name, I would love to get the ID of the zpool of which this disk is a part.

Thanks!

--Balu
 
 


Re: [zfs-discuss] Best way to incorporate disk size tolerance into

2007-08-29 Thread Richard Elling
MC wrote:
> Thanks for the comprehensive replies!
> 
> I'll need some baby speak on this one though: 
> 
>> The recommended use of whole disks is for drives with volatile write 
>> caches where ZFS will enable the cache if it owns the whole disk. There 
>> may be an RFE lurking here, but it might be tricky to correctly implement 
>> to protect against future data corruptions by non-ZFS use.
> 
> I don't know what you mean by "drives with volatile write caches", but I'm 
> dealing with commodity SATA2 drives from WD/Seagate/Hitachi/Samsung.  

You may see it in the data sheet as "buffer" or "cache buffer" for such
drives; usually 8-16 MBytes, with 32 MBytes on newer drives.

> This disk replacement thing is a pretty common use case, so I think it would 
> be smart to sort it out while someone cares, and then stick the authoritative
> answer into the zfs wiki.  This is what I can contribute without knowing the 
> answer:

The authoritative answer is in the man page for zpool.
System Administration Commands  zpool(1M)

 The size of new_device must be greater than or equal  to
 the minimum size of all the devices in a mirror or raidz
 configuration.

> The best way to incorporate abnormal disk size variance tolerance into a 
> raidz array 
> is BLANK, and it has these BLANK side effects.  

This is a problem for replacement, not creation.  For creation, the problem
becomes more generic, but can make use of automation.  I've got some
algorithms to do that, but am not quite ready with a generic solution that is
administrator-friendly.  In other words, the science isn't difficult; the
automation is.
  -- richard


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2007-08-29 Thread Joerg Schilling
sean walmsley <[EMAIL PROTECTED]> wrote:

> We mostly rely on AMANDA, but for a simple, compressed, encrypted, 
> tape-spanning alternative backup (intended for disaster recovery) we use:
>
> tar cf -  | lzf (quick compression utility) | ssl (to encrypt) | 
> mbuffer (which writes to tape and looks after tape changes)
>
> Recovery is exactly the opposite, i.e:
>
> mbuffer | ssl | lzf | tar xf -
>
> The mbuffer utility (http://www.maier-komor.de/mbuffer.html) has the ability 
> to call a script to change tapes, so if you have the mtx utility you're in 
> business. Mbuffer's other great advantage is that it buffers reads and/or 
> writes, so you can make sure that your tape is writing in decent sized chunks 
> (i.e. isn't shoe-shining) even if your system can't keep up with your shiny 
> new LTO-4 drive :-)

Star has included the features to buffer and to change tapes for nearly 20 years.
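
For concreteness, the quoted pipeline might look roughly like this, with gzip
and openssl standing in for lzf and 'ssl', and a hypothetical tape device (a
sketch only, not a recommendation):

    # Backup: archive, compress, encrypt, then stream to tape through a large
    # memory buffer so the drive keeps streaming.
    tar cf - /export/home | gzip -1 | \
        openssl enc -aes-256-cbc -pass file:/etc/backup.key | \
        mbuffer -m 1G -s 256k -o /dev/rmt/0n
    # Restore is the same pipeline reversed:
    mbuffer -i /dev/rmt/0n -m 1G -s 256k | \
        openssl enc -d -aes-256-cbc -pass file:/etc/backup.key | \
        gzip -dc | tar xf -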

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-29 Thread Joerg Schilling
[EMAIL PROTECTED] wrote:

>
> >AFAIK, a read-only UFS mount will unroll the log and thus write to
> >the medium.
>
>
> It does not (that's what code inspection suggests).
>
> It will update the in-memory image with the log entries but the
> log will not be rolled.

Why then does fsck mount the fs read-only before starting the fsck task?

I thought this was in order to unroll the log first.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-29 Thread Casper . Dik

>AFAIK, a read-only UFS mount will unroll the log and thus write to
>the medium.


It does not (that's what code inspection suggests).

It will update the in-memory image with the log entries but the
log will not be rolled.

Casper



Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-29 Thread Joerg Schilling
[EMAIL PROTECTED] wrote:

>
> >> It's worse than this.  Consider the read-only clients.  When you
> >> access a filesystem object (file, directory, etc.), UFS will write
> >> metadata to update atime.  I believe that there is a noatime option to
> >> mount, but I am unsure as to whether this is sufficient.
> >
> >Is this some particular build or version that does this?  I can't find a
> >version of UFS that updates atimes (or anything else) when mounted
> >read-only.
>
> No that is clearly not the case; read-only mounts never write.

AFAIK, a read-only UFS mount will unroll the log and thus write to the medium.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] ZFS Bad Blocks Handling

2007-08-29 Thread Joerg Schilling
Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:

> On Mon, Aug 27, 2007 at 10:00:10PM -0700, RL wrote:
> > Hi,
> > 
> > Does ZFS flag blocks as bad so it knows to avoid using them in the future?
>
> No, it doesn't.  This would be a really nice feature to have, but
> currently when ZFS tries to write to a bad sector it simply tries a few
> times and gives up.  With the COW model it shouldn't be very hard to
> use another block and mark this one as bad, but it's not yet
> implemented.

Bad-block handling was needed before about 1985, when the hardware did not
support remapping bad blocks.  Even at that time, it was done in the disk
driver and not in the filesystem (except for FAT).

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] zfs raidz1+hopspare on X4500 ,can I ?

2007-08-29 Thread zhanjinwei
I have read your papers several times; they are very helpful!

The difference between raidz2 and [raidz1 + 1 hot spare] is this:
if 2 disks in a pool fail at the same time, it is a disaster for the latter
configuration, whereas a raidz2 pool survives it.
But if the 2 disks fail at different times, both configurations remain
available once the bad disks are replaced.

Is that right?
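
For reference, the two layouts being compared look roughly like this on six
disks (hypothetical device names; a sketch only):

    # raidz2: survives any two simultaneous disk failures.
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    # raidz1 + hot spare: survives one failure at a time; the spare only
    # protects again once resilvering onto it has completed.
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 spare c0t5d0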
 
thanks & regards
 
 


Re: [zfs-discuss] import zfs dataset online in a zone

2007-08-29 Thread Stoyan Angelov
hello all,

I have a similar problem - I am trying to add a zfs volume to a non-global
zone without rebooting it.  It is not practical to shut down the application
just to add one more raw device.
Is there a way to manually create the device files in /dev/zvol/... in the
zone and make it aware of the change?

greetings,

Stoyan
 
 


Re: [zfs-discuss] ZFS quota

2007-08-29 Thread Robert Milkowski
Hello Brad,

Monday, August 27, 2007, 3:47:47 PM, you wrote:

>> OK, you asked for "creative" workarounds... here's one (though it requires 
>> that the filesystem be briefly unmounted, which may be deal-killing):

BP> That is, indeed, creative.  :)   And yes, the unmount make it 
BP> impractical in my environment.  

BP> I ended up going back to rsync, because we had more and more
BP> complaints as the snapshots accumulated, but am now just rsyncing to
BP> another system, which in turn runs snapshots on the backup copy.  It's
BP> still time- and i/o-consuming, and the users can't recover their own
BP> files, but at least I'm not eating up 200% of the space otherwise
BP> necessary on the expensive new hardware raid and fielding daily 
BP> over-quota (when not really over-quota) complaints. 

BP> Thanks for the suggestion.  Looking forward to the new feature... 

Instead of rsync you could try sending incrementals using 'zfs send'.
If you have a lot of files it should be much quicker (issuing far fewer
I/Os).
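
A minimal sketch of the incremental approach (hypothetical pool, filesystem,
and host names):

    # Take a new snapshot and send only the delta since the previous one.
    zfs snapshot tank/home@today
    zfs send -i tank/home@yesterday tank/home@today | \
        ssh backuphost zfs receive -F backup/home

On the backup box the accumulated snapshots then remain browsable under
backup/home/.zfs/snapshot.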

-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com
