Re: [zfs-discuss] Re: ZFS + rsync, backup on steroids.

2006-08-31 Thread Robert Milkowski
Hello Richard,

Thursday, August 31, 2006, 8:17:41 AM, you wrote:

RLH> Are both of you doing a umount/mount (or export/import, I guess) of the
RLH> source filesystem before both the first and second test?  Otherwise,
RLH> there might still be a fair bit of cached data left over from the first
RLH> test, which would give the 2nd an unfair advantage.  I'm fairly sure
RLH> unmounting a filesystem invalidates all cached pages associated with
RLH> files on that filesystem, as well as any cached [iv]node entries, all of
RLH> which is needed to ensure both tests are starting from the most similar
RLH> situation possible.  Ideally, all this would even be done in single-user
RLH> mode, so that nothing else could interfere.

IIRC unmounting a ZFS file system won't flush its caches - you've got to
export the entire pool.
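For example (a minimal sketch; the pool name "tank" is illustrative):

# zpool export tank      (unmounts all datasets and releases the pool)
# zpool import tank      (re-imports it with cold caches)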


-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
                                 http://milek.blogspot.com



[zfs-discuss] Newbie questions about drive problems

2006-08-31 Thread Baptiste Augrain
Hi,

I'm a newbie at ZFS but I have some questions: 

I have 3 drives.
The first one will be the primary/boot drive under UFS. The 2 others will
become a mirrored pool with ZFS.
Now, if I have a problem with the boot drive (hardware or software), is all
the data on my mirrored pool still OK?
How can I restore this pool? When I create the pool, do I need to save its
properties?


What happens when a drive crashes while ZFS is writing data to a raidz pool?
Does the pool go to the degraded state or the faulted state?
 
 


[zfs-discuss] Storage Compatibilty list

2006-08-31 Thread san2rini

Hi all,
I am about to try ZFS on my 420 with (of course) Solaris 10 6/06
installed. My question is:
is there a storage compatibility list that shows which storage arrays (in
my case a D1000) do or don't work with ZFS?


cheers





Re: [zfs-discuss] Newbie questions about drive problems

2006-08-31 Thread Constantin Gonzalez
Hi,

> I have 3 drives.
> The first one will be the primary/boot drive under UFS. The 2 others will
> become a mirrored pool with ZFS.
> Now, if I have a problem with the boot drive (hardware or software), is all
> the data on my mirrored pool still OK?
> How can I restore this pool? When I create the pool, do I need to save its
> properties?

All metadata for the pool is stored inside the pool. If the boot disk fails in
any way, all pool data is safe.

Worst case might be that you have to reinstall everything on the boot disk.
After that, you just say "zpool import" to get your pool back and everything
will be ok.
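For example (a minimal sketch; the pool name "tank" is illustrative):

# zpool import           (lists pools found on attached devices)
# zpool import tank      (imports the pool named "tank")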

> What happens when a drive crashes while ZFS is writing data to a raidz pool?

If the crash occurs in the middle of a write operation, the new data
blocks will not be valid. ZFS will then revert to the state before the
new set of blocks was written. You'll therefore have 100% data integrity,
but of course the new blocks that were being written to the pool will be
lost.

> Does the pool go to the degraded state or the faulted state?

No, the pool will come up as online. The degraded state is only for devices
that aren't accessible any more and the faulted state is for pools that do
not have enough valid devices to be complete.
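For reference, the pool and device states show up in zpool status output
(a sketch; the pool name "tank" is illustrative):

# zpool status tank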

Hope this helps,
   Constantin

-- 
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Client Solutions              http://www.sun.de/
Tel.: +49 89/4 60 08-25 91   http://blogs.sun.com/constantin/


Re: [zfs-discuss] Storage Compatibilty list

2006-08-31 Thread James Dickens

On 8/31/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Hi all,
I am about to try ZFS on my 420 with (of course) Solaris 10 6/06
installed. My question is:
is there a storage compatibility list that shows which storage arrays (in
my case a D1000) do or don't work with ZFS?


If Solaris sees the device as a readable and writable block device, or a
slice on the device - be it SCSI, IDE, PATA, a USB flash drive, or even a
lofi-mounted file - and it is larger than 128MB, it can be part of a ZFS
pool.
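For example, a quick sketch using file-backed vdevs (paths and pool name
are illustrative):

# mkfile 256m /var/tmp/d1 /var/tmp/d2
# zpool create testpool mirror /var/tmp/d1 /var/tmp/d2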

James Dickens
uadmin.blogspot.com



cheers







Re: [zfs-discuss] Storage Compatibilty list

2006-08-31 Thread James C. McPherson

James Dickens wrote:

On 8/31/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Hi all,
I am about to try ZFS on my 420 with (of course) Solaris 10 6/06
installed. My question is:
is there a storage compatibility list that shows which storage arrays (in
my case a D1000) do or don't work with ZFS?


If Solaris sees the device as a readable and writable block device, or a
slice on the device - be it SCSI, IDE, PATA, a USB flash drive, or even a
lofi-mounted file - and it is larger than 128MB, it can be part of a ZFS
pool.


San2rini, the only thing I'd add to JamesD's comment is that since
you've got a D1000, if you have two SCSI channels in your E420 you
should put the D1000 in split-bus mode and use ZFS to do the mirroring.

Of course, you're still going to have a SPOF in the D1000, but if you
only want to get a feel for ZFS then it's a good way to demonstrate to
yourself just how reliable ZFS is :)
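For example, a sketch that puts one disk from each channel into every
mirror pair (controller/device names are illustrative):

# zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0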



cheers,
James C. McPherson
(on a permanent search for more disk space )


[zfs-discuss] Find the difference between two snapshots

2006-08-31 Thread Niclas Sodergard

Hi everyone,

Is there an easy way to find out which files have changed between two
snapshots? Currently I'm doing a

# rsync -arvn <snapshot1-dir>/ <snapshot2-dir>/

and it creates a list. But rsync needs to go through the whole
filesystem and compare files. It would be nice if zfs had this
option built in.
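For example, comparing two snapshots through the hidden .zfs directory (a
sketch; filesystem and snapshot names are illustrative):

# rsync -arvn /tank/home/.zfs/snapshot/monday/ \
    /tank/home/.zfs/snapshot/tuesday/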

Regards,
Nickus


Re: [zfs-discuss] Re: zfs and vmware

2006-08-31 Thread Brian Hechinger
On Wed, Aug 30, 2006 at 10:53:20PM -0700, Stefan Johansson wrote:
> Yes I did and it works ok enough for me.
> 
> Would be nice to have vmware server for Solaris instead so I can run Solaris 
> as the host and use zfs directly on the controllers.

You're not the only one who wants that.  Maybe we should petition
VMware to do that. :)

-brian


Re: [zfs-discuss] Find the difference between two snapshots

2006-08-31 Thread Tim Foster
Hi Nickus,

On Thu, 2006-08-31 at 15:44 +0300, Niclas Sodergard wrote:
> Is there an easy way to find out which files have changed between two
> snapshots? Currently I'm doing a
> 
> # rsync -arvn <snapshot1-dir>/ <snapshot2-dir>/
> 
> and it creates a list. But rsync needs to go through the whole
> filesystem and compare files. It would be nice if zfs had this
> option built in.

Nope, unfortunately not - you're interested in bug 
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6425091

 - it'd be pretty handy for the zfs desktop integration stuff I was
playing with too...

cheers,
tim
-- 
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations                       http://blogs.sun.com/timf




Re: [zfs-discuss] Find the difference between two snapshots

2006-08-31 Thread Niclas Sodergard

On 8/31/06, Tim Foster <[EMAIL PROTECTED]> wrote:

Hi Nickus,

On Thu, 2006-08-31 at 15:44 +0300, Niclas Sodergard wrote:
> Is there an easy way to find out which files have changed between two
> snapshots? Currently I'm doing a
>
> # rsync -arvn <snapshot1-dir>/ <snapshot2-dir>/
>
> and it creates a list. But rsync needs to go through the whole
> filesystem and compare files. It would be nice if zfs had this
> option built in.

Nope, unfortunately not - you're interested in bug
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6425091

 - it'd be pretty handy for the zfs desktop integration stuff I was
playing with too...


Thanks for the quick answer. It looks like the problem is much more
complex than I first thought.

cheers,
Nickus


Re: [zfs-discuss] Storage Compatibilty list

2006-08-31 Thread san2rini

Thanks guys for your help.

Of course, I have already put the D1000 in split-bus mode so I can test ZFS
mirroring and so on.


cheers

Alfredo




[zfs-discuss] ZFS with expanding LUNs

2006-08-31 Thread Theo Bongers
Can anyone please tell me how to handle a LUN that has been expanded (on a
RAID array or SAN storage), and how to grow the filesystem without data
loss?
How does ZFS look at the volume? In other words, how can I grow the
filesystem after LUN expansion?
Do I need to run format/type/autoconfigure/label on the specific device?
 
 


Re: [zfs-discuss] File level compression

2006-08-31 Thread Manoj Joseph

Robert Milkowski wrote:

Hello Sanjeev,

Wednesday, August 30, 2006, 3:26:52 PM, you wrote:

SB> Hi,

SB> We were trying out the "compression=on" feature of ZFS and were
SB> wondering if it would make sense to have ZFS do compression only on a
SB> certain kind of files (or rather the other way around).

SB> Our observation:
SB> - If ZFS finds that it cannot achieve a certain amount of compression,
SB> it does not compress the file. However, to figure this out ZFS would
SB> have to first compress the block's data. This means ZFS ends up
SB> consuming resources and that overhead is not worth it.
SB> Is the above observation correct? If so, is there some way this can be
SB> tuned/controlled?

Right now a 12% minimum compression gain is required.
Unfortunately, last I checked it was hard-coded.


RFE 6444911 'zfs lzjb compression needs to be tunable' has been filed 
for this.
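For reference, compression is set per dataset, and the ratio actually
achieved can be checked afterwards (a sketch; the dataset name is
illustrative):

# zfs set compression=on tank/data
# zfs get compressratio tank/data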


Regards,
Manoj


Re: [zfs-discuss] File level compression

2006-08-31 Thread Sanjeev Bagewadi

Manoj Joseph wrote:

Robert Milkowski wrote:

Hello Sanjeev,

Wednesday, August 30, 2006, 3:26:52 PM, you wrote:

SB> Hi,

SB> We were trying out the "compression=on" feature of ZFS and were
SB> wondering if it would make sense to have ZFS do compression only on a
SB> certain kind of files (or rather the other way around).

SB> Our observation:
SB> - If ZFS finds that it cannot achieve a certain amount of compression,
SB> it does not compress the file. However, to figure this out ZFS would
SB> have to first compress the block's data. This means ZFS ends up
SB> consuming resources and that overhead is not worth it.
SB> Is the above observation correct? If so, is there some way this can be
SB> tuned/controlled?

Right now a 12% minimum compression gain is required.
Unfortunately, last I checked it was hard-coded.


RFE 6444911 'zfs lzjb compression needs to be tunable' has been filed
for this.

Thanks! But I don't think it completely addresses the issue I
highlighted above. Probably another RFE could be filed for this, or
these requirements could be added to the current RFE.

Regards,
Sanjeev.


Regards,
Manoj




Re: [zfs-discuss] Find the difference between two snapshots

2006-08-31 Thread Nicolas Williams
On Thu, Aug 31, 2006 at 02:55:27PM +0100, Tim Foster wrote:
> On Thu, 2006-08-31 at 15:44 +0300, Niclas Sodergard wrote:
> > Is there an easy way to find out which files have changed between two
> > snapshots? Currently I'm doing a
> 
> Nope, unfortunately not - you're interested in bug 
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6425091

See also:

6370738 zfs diffs filesystems

The idea is that SEEK_HOLE[*] could be used to find block-level diffs
for individual files that have changed.  [*] Or SEEK_DIFF, to avoid
aliasing holes and diffs.

Dealing with renames/links/unlinks is... harder.  The idea is that a
diffs filesystem would only show directories and files that have changed
or are part of a path where some directory/file has changed, and all
unchanged things would be hidden.  You'd still have to do a fair bit of
work at the app level to deal with renames/links/unlinks generally.

Partly what is difficult here is that ZFS tracks differences between
snapshots/filesystems at the block level, so you can find out that some
dnode changed; but mapping that dnode back to its name(s) in the
filesystem, if it can have hardlinks (i.e., it's not a directory), is as
hard as mapping inode #s back to paths has ever been.

It'd be nice if there were a background dnode->paths indexer that could
asynchronously maintain such an index (synchronously at snapshot time,
so a snapshot could capture this index as it should be).  Asynchronous
so as not to slow down meta-data operations unnecessarily.  If that's
too hard, then make it synchronous but let running it be optional
per-filesystem.

Nico


[zfs-discuss] Re: Newbie questions about drive problems

2006-08-31 Thread Baptiste Augrain
Thanks!

Now I don't have any worries about migrating to ZFS.

> > I have 3 drives.
> > The first one will be the primary/boot drive under UFS. The 2 others
> > will become a mirrored pool with ZFS.
> > Now, if I have a problem with the boot drive (hardware or software),
> > is all the data on my mirrored pool still OK?
> > How can I restore this pool? When I create the pool, do I need to
> > save its properties?
> 
> All metadata for the pool is stored inside the pool. If the boot disk
> fails in any way, all pool data is safe.
> 
> Worst case might be that you have to reinstall everything on the boot
> disk. After that, you just say "zpool import" to get your pool back
> and everything will be ok.
> 
> > What happens when a drive crashes while ZFS is writing data to a
> > raidz pool?
> 
> If the crash occurs in the middle of a write operation, the new data
> blocks will not be valid. ZFS will then revert to the state before the
> new set of blocks was written. You'll therefore have 100% data
> integrity, but of course the new blocks that were being written to the
> pool will be lost.
> 
> > Does the pool go to the degraded state or the faulted state?
> 
> No, the pool will come up as online. The degraded state is only for
> devices that aren't accessible any more and the faulted state is for
> pools that do not have enough valid devices to be complete.
> 
> Hope this helps,
>    Constantin
 
 


Re: [zfs-discuss] ZFS with expanding LUNs

2006-08-31 Thread Matthew Ahrens

Theo Bongers wrote:

Can anyone please tell me how to handle a LUN that has been expanded (on a
RAID array or SAN storage), and how to grow the filesystem without data
loss?
How does ZFS look at the volume? In other words, how can I grow the
filesystem after LUN expansion?
Do I need to run format/type/autoconfigure/label on the specific device?


I believe that if you have given ZFS the whole disk, then it will
automatically detect that the LUN has grown when it opens the device.
You can cause this to happen by rebooting the machine, or running
'zpool export <pool>; zpool import <pool>'.


--matt


Re: [zfs-discuss] Re: ZFS + rsync, backup on steroids.

2006-08-31 Thread Matthew Ahrens

Robert Milkowski wrote:

Hello Richard,

Thursday, August 31, 2006, 8:17:41 AM, you wrote:

RLH> Are both of you doing a umount/mount (or export/import, I guess) of the
RLH> source filesystem before both the first and second test?  Otherwise,
RLH> there might still be a fair bit of cached data left over from the first
RLH> test, which would give the 2nd an unfair advantage.  I'm fairly sure
RLH> unmounting a filesystem invalidates all cached pages associated with
RLH> files on that filesystem, as well as any cached [iv]node entries, all of
RLH> which is needed to ensure both tests are starting from the most similar
RLH> situation possible.  Ideally, all this would even be done in single-user
RLH> mode, so that nothing else could interfere.

IIRC unmounting a ZFS file system won't flush its caches - you've got to
export the entire pool.


That's correct.  And I did ensure that the data was not cached before 
each of my tests.


--matt


Re: [zfs-discuss] ZFS with expanding LUNs

2006-08-31 Thread Eric Schrock
On Thu, Aug 31, 2006 at 09:54:25AM -0700, Matthew Ahrens wrote:
> Theo Bongers wrote:
> >Can anyone please tell me how to handle a LUN that has been expanded (on
> >a RAID array or SAN storage), and how to grow the filesystem without
> >data loss?
> >How does ZFS look at the volume? In other words, how can I grow the
> >filesystem after LUN expansion?
> >Do I need to run format/type/autoconfigure/label on the specific device?
> 
> I believe that if you have given ZFS the whole disk, then it will
> automatically detect that the LUN has grown when it opens the device.
> You can cause this to happen by rebooting the machine, or running
> 'zpool export <pool>; zpool import <pool>'.

I think a 'zpool online <pool> <device>' should also work, since it triggers a
vdev_reopen().  But this requires that the underlying driver correctly
detects the LUN expansion and reflects this in the ldi_get_size() call.
I'm not sure if all drivers properly handle this case.
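For example (a sketch; pool and device names are illustrative):

# zpool online tank c2t0d0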

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] Re: ZFS + rsync, backup on steroids.

2006-08-31 Thread Roch

Matthew Ahrens writes:
 > Robert Milkowski wrote:
 > > Hello Richard,
 > > 
 > > Thursday, August 31, 2006, 8:17:41 AM, you wrote:
 > > 
 > > RLH> Are both of you doing a umount/mount (or export/import, I guess)
 > > RLH> of the source filesystem before both the first and second test?
 > > RLH> Otherwise, there might still be a fair bit of cached data left
 > > RLH> over from the first test, which would give the 2nd an unfair
 > > RLH> advantage.  I'm fairly sure unmounting a filesystem invalidates
 > > RLH> all cached pages associated with files on that filesystem, as
 > > RLH> well as any cached [iv]node entries, all of which is needed to
 > > RLH> ensure both tests are starting from the most similar situation
 > > RLH> possible.  Ideally, all this would even be done in single-user
 > > RLH> mode, so that nothing else could interfere.
 > > 
 > > IIRC unmounting a ZFS file system won't flush its caches - you've got
 > > to export the entire pool.
 > 
 > That's correct.  And I did ensure that the data was not cached before
 > each of my tests.
 > 
 > --matt

Matt?

It seems to me that (at least in the past) unmount would actually cause
the data to not be accessible (a read would issue an I/O) even if the
memory associated with previously cached data was not quite reaped back
to the OS.

I'm currently going on

umount to clear the cache.
export to free up the memory.

Does this sound correct?

-r



Re: [zfs-discuss] Re: ZFS + rsync, backup on steroids.

2006-08-31 Thread Matthew Ahrens

Roch wrote:

Matthew Ahrens writes:
 > Robert Milkowski wrote:
 > > IIRC unmounting a ZFS file system won't flush its caches - you've got
 > > to export the entire pool.
 > 
 > That's correct.  And I did ensure that the data was not cached before 
 > each of my tests.


Matt  ?

It seems to me that (at least in the past) unmount would actually cause
the data to not be accessible (a read would issue an I/O) even if the
memory associated with previously cached data was not quite reaped back
to the OS.


Looks like you're right, we do (mostly) evict the data when a filesystem 
is unmounted.  The exception is if some of its cached data is being 
shared with another filesystem (eg, via a clone fs), then that data will 
not be evicted.


--matt


Re: [zfs-discuss] migrating data across boxes

2006-08-31 Thread Matthew Ahrens

John Beck wrote:

% zfs snapshot -r space@<snap>
% zfs send space/jbeck@<snap> | ssh newbox zfs recv -d space
% zfs send space/local@<snap> | ssh newbox zfs recv -d space

...

% zfs set mountpoint=/export/home space
% zfs set mountpoint=/usr/local space/local
% zfs set sharenfs=on space/jbeck space/local


I'm working on some enhancements to zfs send/recv that will simplify 
this even further, especially in cases where you have many filesystems, 
snapshots, or changed properties.  In particular, you'll be able to 
simply do:


# zfs snapshot -r space@today
# zfs send -r -b space@today | ssh newbox zfs recv -p -d newpool

The "send -b" flag means to send from the beginning.  This will send a 
full stream of the oldest snapshot, and incrementals up to the named 
snapshot (eg, from @a to @b, from @b to @c, ... @j to @today).  This way 
your new pool will have all of the snapshots from your old pool.


The "send -r" flag means to do this for all the filesystem's descendants 
as well (in this case, space/jbeck and space/local).


The "recv -p" flag means to preserve locally set properties (in this 
case, the mountpoint and sharenfs settings).


For more information, see RFEs 6421959 and 6421958, and watch for a 
forthcoming formal interface proposal.


--matt


[zfs-discuss] Re: ZFS with expanding LUNs

2006-08-31 Thread Theo Bongers
OK, that's clear, but it doesn't give me any real solid ground. Maybe I
should tell you more about the hardware. It's a Sun X4200 running Solaris
10 x86, connecting through Fibre Channel to StorageTek D280 RAID storage.

Maybe this makes the picture more complete.

Greets,
 
 


Re: [zfs-discuss] File level compression

2006-08-31 Thread Torrey McMahon

Sanjeev Bagewadi wrote:

Manoj Joseph wrote:

Robert Milkowski wrote:

Hello Sanjeev,

Wednesday, August 30, 2006, 3:26:52 PM, you wrote:

SB> Hi,

SB> We were trying out the "compression=on" feature of ZFS and were
SB> wondering if it would make sense to have ZFS do compression only on a
SB> certain kind of files (or rather the other way around).

SB> Our observation:
SB> - If ZFS finds that it cannot achieve a certain amount of compression,
SB> it does not compress the file. However, to figure this out ZFS would
SB> have to first compress the block's data. This means ZFS ends up
SB> consuming resources and that overhead is not worth it.
SB> Is the above observation correct? If so, is there some way this can be
SB> tuned/controlled?

Right now a 12% minimum compression gain is required.
Unfortunately, last I checked it was hard-coded.

RFE 6444911 'zfs lzjb compression needs to be tunable' has been filed
for this.

Thanks! But I don't think it completely addresses the issue I
highlighted above. Probably another RFE could be filed for this, or
these requirements could be added to the current RFE.



I'd say it's a new RFE. You'd probably want whitelist and blacklist
functionality too.


--
Torrey McMahon
Sun Microsystems Inc.



[zfs-discuss] libzfs question

2006-08-31 Thread SRIKANTH KONERU
Hi there...
I have Solaris 5.10 update 2.
I am trying to get some iostats on a zpool. A little bit of digging
indicated that I can do this via library calls: libzfs_init(),
zpool_open(), and so on.

But /lib/libzfs.so doesn't have the implementation of libzfs_init(); I
checked with the /usr/ccs/bin/nm utility.

Another issue: zpool_open() emits
"cannot open '': invalid pool name" and sets errno to ENOENT,
even though I have provided the correct zpool name.

Please let me know if there are any pointers on how to use libzfs, like
man pages or documentation.

Thx
SRIKANTH KONERU
 
 


Re: [zfs-discuss] libzfs question

2006-08-31 Thread Eric Schrock
No, there is no documentation for libzfs.  Your best bet is to look at
the OpenSolaris source and see how it's being used.  It's a
consolidation-private library, so it's not intended for public use.
That being said, if you're willing to go through some growing pains on
upgrade/patch, you're welcome to use it.

For Solaris 10 updates, you're going to run into problems because the
interfaces are different between the two Solaris versions.  Since the
S10 source isn't open, it's hard to tell which interface changes have
been backported and which haven't.

For versions of libzfs with libzfs_init(), you need to do (checking for
NULL in each case):

libzfs_handle_t *hdl = libzfs_init();               /* library handle */
zpool_handle_t *zhp = zpool_open(hdl, "poolname");  /* open pool by name */
nvlist_t *stats = zpool_get_stats(zhp);             /* stats as an nvlist */

You'll have to pick apart the nvlist contents on your own.  For versions
of libzfs prior to libzfs_init(), you do:

zpool_handle_t *zhp = zpool_open("poolname");   /* older API: no handle */
nvlist_t *stats = zpool_get_stats(zhp);

Hope that helps.  For figuring this stuff out, the source is your
friend ;-)

- Eric

On Thu, Aug 31, 2006 at 12:07:35PM -0700, SRIKANTH KONERU wrote:
> Hi there...
> I have Solaris 5.10 update 2.
> I am trying to get some iostats on a zpool. A little bit of digging
> indicated that I can do this via library calls: libzfs_init(),
> zpool_open(), and so on.
> 
> But /lib/libzfs.so doesn't have the implementation of libzfs_init(); I
> checked with the /usr/ccs/bin/nm utility.
> 
> Another issue: zpool_open() emits
> "cannot open '': invalid pool name" and sets errno to ENOENT,
> even though I have provided the correct zpool name.
> 
> Please let me know if there are any pointers on how to use libzfs, like
> man pages or documentation.
> 
> Thx
> SRIKANTH KONERU

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


[zfs-discuss] can't create snapshot

2006-08-31 Thread Frank Cusack

[EMAIL PROTECTED]:~]# zfs snapshot export/zone/smb/share/<fs>@<snap>
internal error: unexpected error 16 at line 2302 of ../common/libzfs_dataset.c

I don't have this problem with other filesystems.  There was one existing
snapshot which, after getting the above error, I deleted successfully.  But
I still can't take a new snapshot.

The filesystem is not mounted.

-frank


Re: [zfs-discuss] can't create snapshot

2006-08-31 Thread Frank Cusack

On August 31, 2006 4:00:52 PM -0700 Frank Cusack <[EMAIL PROTECTED]> wrote:

[EMAIL PROTECTED]:~]# zfs snapshot export/zone/smb/share/<fs>@<snap>
internal error: unexpected error 16 at line 2302 of ../common/libzfs_dataset.c

I don't have this problem with other filesystems.  There was one existing
snapshot which, after getting the above error, I deleted successfully.  But
I still can't take a new snapshot.

The filesystem is not mounted.


This was on a zoned filesystem.  I booted the single zone mounting this fs
(which then mounted the fs) and was able to take the snapshot.  After halting
the zone, I am still able to take snapshots.  Note that this same zone mounts
other filesystems, on which I was able to take snapshots without any problems.

-frank


Re: [zfs-discuss] can't create snapshot

2006-08-31 Thread Matthew Ahrens

Frank Cusack wrote:

[EMAIL PROTECTED]:~]# zfs snapshot export/zone/smb/share/<fs>@<snap>
internal error: unexpected error 16 at line 2302 of 
../common/libzfs_dataset.c


I don't have this problem with other filesystems.  There was one existing
snapshot which, after getting the above error, I deleted successfully.  But
I still can't take a new snapshot.

The filesystem is not mounted.


Hmm, I wonder if this is related to 6462803 "zfs snapshot -r failed 
because filesystem was busy"?  If you mount the filesystem (thus causing 
any latent intent log to be played), can you take the snapshot?
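For example (a sketch; dataset and snapshot names are illustrative):

# zfs mount export/zone/smb/share/myfs
# zfs snapshot export/zone/smb/share/myfs@test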


--matt