Re: [zfs-discuss] [RFC] Improved versioned pointer algorithms

2008-07-20 Thread Daniel Phillips
On Monday 14 July 2008 08:29, Akhilesh Mritunjai wrote:
 Writable snapshots are called clones in zfs. So in fact, you have
 trees of snapshots and clones. Snapshots are read-only, and you can
 create any number of writable clones from a snapshot, that behave
 like a normal filesystem and you can again take snapshots of the
 clones. 

So if I snapshot a filesystem, then clone it, then delete a file
from both the clone and the original filesystem, the presence
of the snapshot will prevent the file blocks from being recovered,
and there is no way I can get rid of those blocks short of deleting
both the clone and the snapshot.  Did I get that right?

Regards,

Daniel


Re: [zfs-discuss] Adding my own compression to zfs

2008-07-20 Thread Rob Clark
 Robert Milkowski wrote:
 During Christmas I managed to add my own compression to zfs - it was quite 
 easy. 

Great to see innovation, but unless your personal compression method is somehow 
better (very fast, with excellent compression), wouldn't it be a better idea to 
use an existing, leading-edge compression method?

7-Zip's (http://www.7-zip.org/) 'newest' methods are LZMA and PPMD 
(http://www.7-zip.org/7z.html). 

There is a proprietary license for LZMA that _might_ interest Sun, but PPMD has 
no explicit license; see this link:

Using PPMD for compression
http://www.codeproject.com/KB/recipes/ppmd.aspx
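
For comparison, ZFS already lets you pick a compressor per dataset; a minimal
sketch, assuming a hypothetical dataset tank/data and a build that includes
the gzip compression support:

# zfs set compression=gzip-9 tank/data
# zfs get compressratio tank/data

A new method would presumably slot in as another value of the same property.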

Rob
 
 


Re: [zfs-discuss] How to delete hundreds of empty snapshots

2008-07-20 Thread Rob Clark
 I got overzealous with snapshot creation. Every 5 mins is a bad idea. Way too 
 many.
 What's the easiest way to delete the empty ones?
 zfs list takes FOREVER

You might enjoy reading:

ZFS snapshot massacre
http://blogs.sun.com/chrisg/entry/zfs_snapshot_massacre.

(Yes, the trailing '.' is part of the URL (NMF), so include it or you'll get a 404.)

Rob
 
 


Re: [zfs-discuss] install opensolaris on raidz

2008-07-20 Thread Miles Nordin
 r == Ross  [EMAIL PROTECTED] writes:

 r the benefit of mirroring that CF drive would be minimal.
 
rather short-sighted.  What if you want to replace the CF with a
bigger or faster one without shutting down?
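
With a mirrored root pool that swap can in fact happen online; a rough sketch,
assuming hypothetical device names and that the new CF card is at least as
large as the old one:

# zpool attach rpool c1t0d0s0 c2t0d0s0     (mirror onto the new card)
# zpool status rpool                       (wait for the resilver to finish)
# zpool detach rpool c1t0d0s0              (retire the old card)

(For a bootable root you would also need to install boot blocks on the new
device.)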




Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-20 Thread Rob Clark
 -Peter Tribble wrote:

 On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
 I have eight 10GB drives.
 ...
 I have 6 remaining 10 GB drives and I desire to
 raid 3 of them and mirror them to the other 3 to
 give me raid security and integrity with mirrored
 drive performance. I then want to move my /export
 directory to the new drive.
 ...

 You can't do that. You can't layer raidz and mirroring.
 You'll either have to use raidz for the lot, or just use mirroring:
 zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror c1t6d0 c1t8d0
 -Peter Tribble


Solaris may not allow me to do that but the concept is not unheard of:


Quoting: 
Proceedings of the Third USENIX Conference on File and Storage Technologies
http://www.usenix.org/publications/library/proceedings/fast04/tech/corbett/corbett.pdf

Mirrored RAID-4 and RAID-5 protect against higher order failures [4]. However, 
the efficiency of the array as measured by its data capacity divided by its 
total disk space is reduced.

[4] Qin Xin, E. Miller, T. Schwarz, D. Long, S. Brandt, W. Litwin, "Reliability 
mechanisms for very large storage systems", 20th IEEE/11th NASA Goddard 
Conference on Mass Storage Systems and Technologies, San Diego, CA, pp. 
146-156, Apr. 2003.

Rob
 
 


Re: [zfs-discuss] How to delete hundreds of empty snapshots

2008-07-20 Thread Chris Gerhard
Also http://blogs.sun.com/chrisg/entry/a_faster_zfs_snapshot_massacre which I 
run every night.  Lots of snapshots are not a bad thing; it is keeping them for 
a long time that takes space.  I'm still snapping every 10 minutes and it is 
great. 
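
A hedged one-liner in the same spirit (it destroys every snapshot whose used
property is zero, which is one reading of "empty"; put echo in front of
zfs destroy first to see what it would do):

# zfs list -H -t snapshot -o name,used | awk '$2 == "0" {print $1}' | xargs -n 1 zfs destroy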

The thing I discovered was that I really wanted to be able to find distinct 
versions of a file so that I could see which one was the version I wanted to 
get back. To that end I wrote 
http://blogs.sun.com/chrisg/entry/zfs_versions_of_a_file and filed this RFE to 
help with this: 

http://bugs.opensolaris.org/view_bug.do?bug_id=6719101
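
(Until then, the per-filesystem .zfs/snapshot directory gives a crude version
of the same thing; a sketch, with a hypothetical path:

# ls -l /tank/home/.zfs/snapshot/*/chris/report.odt

one line of output per snapshot that contains the file.)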
 
 


[zfs-discuss] copying a ZFS

2008-07-20 Thread James Mauro
Is there an optimal method of making a complete copy of a ZFS, aside from the 
conventional methods (tar, cpio)?

We have an existing ZFS that was not created with the optimal recordsize.
We wish to create a new ZFS with the optimal recordsize (8k), and copy
all the data from the existing ZFS to the new ZFS.

Obviously, we know how to do this using conventional utilities and commands.

Is there a ZFS-specific method for doing this that beats the heck out of tar, etc.?
(RTFM indicates there is not; I R'd the FM :^).

This may or may not be a copy to the same zpool, and I'd also be interested in
knowing if that makes a difference (I do not think it does)?

Thanks,
/jim
 
 


Re: [zfs-discuss] copying a ZFS

2008-07-20 Thread Mattias Pantzare
2008/7/20 James Mauro [EMAIL PROTECTED]:
 Is there an optimal method of making a complete copy of a ZFS, aside from the 
 conventional methods (tar, cpio)?

 We have an existing ZFS that was not created with the optimal recordsize.
 We wish to create a new ZFS with the optimal recordsize (8k), and copy
 all the data from the existing ZFS to the new ZFS.

 Obviously, we know how to do this using conventional utilities and commands.

 Is there a ZFS-specific method for doing this that beats the heck out of tar, etc.?
 (RTFM indicates there is not; I R'd the FM :^).

Use zfs send | zfs receive if you wish to keep your snapshots or if
you will be doing the copy several times. You can send just the
changes between two snapshots.

(zfs send is in the FM :-)
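
In outline, with hypothetical dataset names (-i sends only the delta between
the two snapshots):

# zfs snapshot tank/data@copy1
# zfs send tank/data@copy1 | zfs receive tank/newdata
(later, to pick up whatever changed since copy1)
# zfs snapshot tank/data@copy2
# zfs send -i copy1 tank/data@copy2 | zfs receive -F tank/newdata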


 This may or may not be a copy to the same zpool, and I'd also be interested in
 knowing if that makes a difference (I do not think it does)?

It does not.


Re: [zfs-discuss] install opensolaris on raidz

2008-07-20 Thread Bob Friesenhahn
On Sun, 20 Jul 2008, Miles Nordin wrote:

 r == Ross  [EMAIL PROTECTED] writes:

 r the benefit of mirroring that CF drive would be minimal.

 rather short-sighted.  What if you want to replace the CF with a
 bigger or faster one without shutting down?

Assuming that you are using zfs root, you just snapshot the filesystem 
and send it to some other system where you build the replacement CF 
card.  Of course the bit of data which changes before the CF card is 
replaced will be lost unless you take special care.  A shutdown is 
required in order to replace the card.  Presuming that the card is 
easily reached, a tech should be able to swap it out in a few minutes.
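
A sketch of that snapshot-and-send step (buildhost and cfpool are hypothetical
names; -R carries the whole snapshotted hierarchy):

# zfs snapshot -r rpool@move
# zfs send -R rpool@move | ssh buildhost zfs recv -F -d cfpool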

Regardless, I can't imagine any reason why you would want to install a 
larger or faster card.  Ideally the card should be just big enough to 
serve the purpose since larger cards will be less reliable.  The 
boot/root filesystems should be fairly static.  The only time you 
should notice card performance is when the system is booting.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-20 Thread Richard Elling
Rob Clark wrote:
 -Peter Tribble wrote:
  On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
   I have eight 10GB drives.
   ...
   I have 6 remaining 10 GB drives and I desire to
   raid 3 of them and mirror them to the other 3 to
   give me raid security and integrity with mirrored
   drive performance. I then want to move my /export
   directory to the new drive.
   ...

  You can't do that. You can't layer raidz and mirroring.
  You'll either have to use raidz for the lot, or just use mirroring:
  zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror c1t6d0 c1t8d0
  -Peter Tribble

 Solaris may not allow me to do that but the concept is not unheard of:

Solaris will allow you to do this, but you'll need to use SVM instead
of ZFS.  Or, I suppose, you could use SVM for RAID-5 and ZFS to
mirror those.
 -- richard
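
A sketch of that combination under SVM (assuming state database replicas
already exist; the slice names are hypothetical):

# metainit d10 -r c1t2d0s0 c1t4d0s0 c1t5d0s0   (first RAID-5 metadevice)
# metainit d20 -r c1t3d0s0 c1t6d0s0 c1t8d0s0   (second RAID-5 metadevice)
# zpool create temparray mirror /dev/md/dsk/d10 /dev/md/dsk/d20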



Re: [zfs-discuss] copying a ZFS

2008-07-20 Thread Bob Friesenhahn
On Sun, 20 Jul 2008, Mattias Pantzare wrote:

 Is there a ZFS-specific method for doing this that beats the heck out of tar, etc.?
 (RTFM indicates there is not; I R'd the FM :^).

 Use zfs send | zfs receive if you wish to keep your snapshots or if
 you will be doing the copy several times. You can send just the
 changes between two snapshots.

The problem is that 'zfs send' likely preserves the existing block 
size even if the target pool uses a different block size since it 
operates at a low level which intends to preserve the original zfs 
blocks.

I would use 'find . -depth -print | cpio -pdum destdir' to do the 
copy.
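
Putting the recordsize change and the copy together (a sketch; tank/newfs and
the mount points are hypothetical, and recordsize only affects blocks written
after it is set):

# zfs create -o recordsize=8k tank/newfs
# cd /tank/oldfs
# find . -depth -print | cpio -pdum /tank/newfs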

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



Re: [zfs-discuss] copying a ZFS

2008-07-20 Thread Jim Mauro
So I'm really exposing my ignorance here, but...

You wrote "... if you wish to keep your snapshots ...".
I never mentioned snapshots, thus you
introduced the use of a ZFS snapshot as a method of doing what
I wish to do. And yes, snapshots and send are in the manual, and
I read about them.

I initially (and perhaps incorrectly) rejected the use of snapshots for
my purposes since a snapshot is, by definition, a read-only copy
of the file system. What I need to do is copy the file system in its
entirety, so I can mount the new file system read/write for online,
production use. Perhaps I should have been clearer about that.

I will investigate using ZFS snapshots with ZFS send as a method
for accomplishing my task. I'm not convinced it's the best way
to achieve my goal, but if it's not, I'd like to make sure I understand
why not.

Thanks for your interest.
/jim


Mattias Pantzare wrote:
 2008/7/20 James Mauro [EMAIL PROTECTED]:
   
 Is there an optimal method of making a complete copy of a ZFS, aside from 
 the conventional methods (tar, cpio)?

 We have an existing ZFS that was not created with the optimal recordsize.
 We wish to create a new ZFS with the optimal recordsize (8k), and copy
 all the data from the existing ZFS to the new ZFS.

 Obviously, we know how to do this using conventional utilities and commands.

 Is there a ZFS-specific method for doing this that beats the heck out of tar, etc.?
 (RTFM indicates there is not; I R'd the FM :^).
 

 Use zfs send | zfs receive if you wish to keep your snapshots or if
 you will be doing the copy several times. You can send just the
 changes between two snapshots.

 (zfs send is in the FM :-)

   
 This may or may not be a copy to the same zpool, and I'd also be interested in
 knowing if that makes a difference (I do not think it does)?
 

 It does not.
   


Re: [zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-20 Thread Bill Sommerfeld
On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
  I ran a scrub on a root pool after upgrading to snv_94, and got checksum 
  errors:
 
 Hmm, after reading this, I started a zpool scrub on my mirrored pool, 
 on a system that is running post-snv_94 bits.  It also found checksum errors:
 
 # zpool status files
   pool: files
  state: DEGRADED
 status: One or more devices has experienced an unrecoverable error.  An
         attempt was made to correct the error.  Applications are unaffected.
 action: Determine if the device needs to be replaced, and clear the errors
         using 'zpool clear' or replace the device with 'zpool replace'.
    see: http://www.sun.com/msg/ZFS-8000-9P
  scrub: scrub completed after 0h46m with 9 errors on Fri Jul 18 13:33:56 2008
 config:
 
         NAME          STATE     READ WRITE CKSUM
         files         DEGRADED     0     0    18
           mirror      DEGRADED     0     0    18
             c8t0d0s6  DEGRADED     0     0    36  too many errors
             c9t0d0s6  DEGRADED     0     0    36  too many errors
 
 errors: No known data errors

out of curiosity, is this a root pool?  

A second system of mine with a mirrored root pool (and an additional
large multi-raidz pool) shows the same symptoms on the mirrored root
pool only.

Once is accident.  Twice is coincidence.  Three times is enemy
action :-)

I'll file a bug as soon as I can (I'm travelling at the moment with
spotty connectivity), citing my and your reports.

- Bill



Re: [zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-20 Thread dick hoogendijk
On Sun, 20 Jul 2008 11:26:16 -0700
Bill Sommerfeld [EMAIL PROTECTED] wrote:

 once is accident.  twice is coincidence.  three times is enemy
 action :-)

I have no access to b94 yet, but as it is, it's probably better to
skip this build when it comes out.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv91 ++


Re: [zfs-discuss] Formatting Problem of ZFS Adm Guide (pdf)

2008-07-20 Thread W. Wayne Liauh
 ZFS Administration Guide (in PDF format) does not
 look very professional (at least on
 Evince/OS2008.05).  Please see attached screenshot.

Looks like this is a display problem.  It seems that certain (monospace) fonts 
are not displayed correctly by the version of Evince included in OS 2008.05.  
Please ignore this thread.  I am re-posting it in the Indiana forum.
 
 


Re: [zfs-discuss] copying a ZFS

2008-07-20 Thread James C. McPherson
Jim Mauro wrote:
 So I'm really exposing my ignorance here, but...
 
 You wrote "... if you wish to keep your snapshots ...".
 I never mentioned snapshots, thus you
 introduced the use of a ZFS snapshot as a method of doing what
 I wish to do. And yes, snapshots and send are in the manual, and
 I read about them.
 
 I initially (and perhaps incorrectly) rejected the use of snapshots for
 my purposes since a snapshot is, by definition, a read-only copy
 of the file system. What I need to do is copy the file system in its
 entirety, so I can mount the new file system read/write for online,
 production use. Perhaps I should have been clearer about that.
 
 I will investigate using ZFS snapshots with ZFS send as a method
 for accomplishing my task. I'm not convinced it's the best way
 to achieve my goal, but if it's not, I'd like to make sure I understand
 why not.


Hi Jim,
I agree with Mattias - snapshots are the way to achieve this.
The bit you might, perhaps, have missed is the _clone_ requirement
so you can have read and write access:

# zfs snapshot sink/data@snap
# zfs clone sink/data@snap sink/newcopyofdata

Or if you do want to use zfs send/recv:

# zfs snapshot sink/data@snap
# zfs send -R sink/data@snap | zfs recv -d newzpool/dataset
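
A follow-up worth knowing about (same hypothetical names): once the clone is
serving production, it can be decoupled from its origin:

# zfs promote sink/newcopyofdata

After promotion the dependency is reversed, so the old filesystem can
eventually be destroyed.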



 Mattias Pantzare wrote:
 2008/7/20 James Mauro [EMAIL PROTECTED]:
   
 Is there an optimal method of making a complete copy of a ZFS, aside from 
 the conventional methods (tar, cpio)?

 We have an existing ZFS that was not created with the optimal recordsize.
 We wish to create a new ZFS with the optimal recordsize (8k), and copy
 all the data from the existing ZFS to the new ZFS.

 Obviously, we know how to do this using conventional utilities and commands.

 Is there a ZFS-specific method for doing this that beats the heck out of
 tar, etc.?
 (RTFM indicates there is not; I R'd the FM :^).
 
 Use zfs send | zfs receive if you wish to keep your snapshots or if
 you will be doing the copy several times. You can send just the
 changes between two snapshots.

 (zfs send is in the FM :-)

   
 This may or may not be a copy to the same zpool, and I'd also be interested in
 knowing if that makes a difference (I do not think it does)?
 
 It does not.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] [RFC] Improved versioned pointer algorithms

2008-07-20 Thread Akhilesh Mritunjai
 On Monday 14 July 2008 08:29, Akhilesh Mritunjai
 wrote:
  Writable snapshots are called clones in zfs. So
 infact, you have
  trees of snapshots and clones. Snapshots are
 read-only, and you can
  create any number of writable clones from a
 snapshot, that behave
  like a normal filesystem and you can again take
 snapshots of the
  clones. 
 
 So if I snapshot a filesystem, then clone it, then
 delete a file
 from both the clone and the original filesystem, the
 presence
 of the snapshot will prevent the file blocks from
 being recovered,
 and there is no way I can get rid of those blocks
 short of deleting
 both the clone and the snapshot.  Did I get that
 right?

Right. Snapshots are immutable. Isn't this the whole point of a snapshot?

FS1(file1) -> Snapshot1(file1)

delete FS1/file1 : Snapshot1/file1 is still intact

Snapshot1(file1) -> CloneFS1(file1)

delete CloneFS1/file1 : Snapshot1/file1 is still intact (snapshot is immutable)
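
In zfs commands that sequence looks roughly like this (hypothetical names;
note the destroy order, since the clone depends on the snapshot):

# zfs snapshot tank/fs@snap1
# zfs clone tank/fs@snap1 tank/clone1
# rm /tank/fs/file1 /tank/clone1/file1
(file1's blocks are still referenced by tank/fs@snap1)
# zfs destroy tank/clone1
# zfs destroy tank/fs@snap1
(only now is the space freed)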

There is a lot of information in the zfs docs on the zfs community site. For 
low-level info, you may refer to the ZFS on-disk format document.

Regards
- Akhilesh
 
 


Re: [zfs-discuss] Formatting Problem of ZFS Adm Guide (pdf)

2008-07-20 Thread Akhilesh Mritunjai
Evince likes to fuzz a number of PDFs. I too can't seem to nail down the 
problem, but it seems that a number of PDFs from Sun have this problem (very 
wrong character spacing), and they all have been generated using FrameMaker. 
PDFs generated using TeX/LaTeX are *usually* ok.
 
 