Re: [zfs-discuss] Backing up a ZFS pool

2010-01-16 Thread Edward Ned Harvey
 What is the best way to back up a zfs pool for recovery?  Recover
 entire pool or files from a pool...  Would you use snapshots and
 clones?
 
 I would like to move the backup to a different disk and not use
 tapes.

Personally, I use zfs send | zfs receive to an external disk.  Initially a
full image, and later incrementals.  This way, you've got the history of the
previous snapshots you've received on the external disk, it's instantly
available if you connect it to a new computer, and you can restore either the
whole FS, or a single file if you want.
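As a sketch of that workflow (the pool, dataset, snapshot and device names below are placeholders, not from the original post):

```shell
# One-time: create a pool on the external disk (device name is illustrative)
zpool create backup c2t0d0

# Initial full send of a snapshot into the backup pool
zfs snapshot tank/data@2010-01-16
zfs send tank/data@2010-01-16 | zfs receive backup/data

# Later: send only the changes between the last common snapshot and a new one
zfs snapshot tank/data@2010-01-23
zfs send -i tank/data@2010-01-16 tank/data@2010-01-23 | zfs receive backup/data
```

Because the receiving side keeps the snapshots, attaching the external disk to another machine only requires a zpool import of the backup pool.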

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Edward Ned Harvey
 I am considering building a modest sized storage system with zfs. Some
 of the data on this is quite valuable, some small subset to be backed
 up forever, and I am evaluating back-up options with that in mind.

You don't need to store the zfs send data stream on your backup media.
This would be annoying for the reasons mentioned - some risk of not being
able to restore in the future (although that's a pretty small risk) and
inability to restore with any granularity, i.e. you have to restore the
whole FS if you restore anything at all.

A better approach would be to zfs send and pipe directly to zfs receive on
the external media.  This way, in the future, anything which can read ZFS
can read the backup media, and you have granularity to restore either the
whole FS, or individual things inside there.

Plus, the only way to guarantee the integrity of a zfs send data stream is
to perform a zfs receive on that data stream.  So by performing a
successful receive, you've guaranteed the datastream is not corrupt.  Yet.



Re: [zfs-discuss] ZIL to disk

2010-01-16 Thread Jeffry Molanus
Thx all, I understand now.

BR, Jeffry
 
 if an application requests a synchronous write then it is committed to
 the ZIL immediately; once that is done, the IO is acknowledged to the
 application.  But data written to the ZIL is still in memory as part of a
 currently open txg and will be committed to the pool with no need to read
 anything from the ZIL.  Then there is the optimization you wrote about
 above, so the data blocks do not necessarily need to be written, just the
 pointers which point to them.
 
 Now it is slightly more complicated as you need to take into account the
 logbias property and the possibility that a dedicated ZIL device could be
 present.
 
 As Neil wrote, zfs will read from the ZIL only if, while importing a pool,
 it is detected that there is some data in the ZIL which hasn't been
 committed to the pool yet - which could happen due to a system reset,
 power loss or devices suddenly disappearing.
 
 --
 Robert Milkowski
 http://milek.blogspot.com
 
 
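As an aside for readers, the logbias property and the dedicated log device mentioned above are configured roughly like this (the pool, dataset and device names here are illustrative, not from the thread):

```shell
# Bias synchronous writes on one dataset toward throughput instead of latency
zfs set logbias=throughput tank/db

# Add a dedicated ZIL (slog) device to the pool
zpool add tank log c3t0d0
```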


Re: [zfs-discuss] zpool fragmentation issues? (dovecot)

2010-01-16 Thread Damon Atkins
In my previous post I was referring more to mdbox (multi-dbox) rather than dbox; 
however, I believe the metadata is stored with the mail message in version 1.x, 
whereas in 2.x the metadata is not updated within the message, which would be 
better for ZFS.

What I am saying is that one message per file, which is not updated, is better 
for snapshots.  I believe the 2.x version of single-dbox should be better for 
snapshots (i.e. the metadata is no longer stored with the message) compared 
with 1.x dbox.

Cheers
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-16 Thread dick hoogendijk
On Sat, 2010-01-16 at 07:24 -0500, Edward Ned Harvey wrote:

 Personally, I use zfs send | zfs receive to an external disk.  Initially a
 full image, and later incrementals.

Do these incrementals go into the same filesystem that received the
original zfs stream?



Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Joerg Schilling
Edward Ned Harvey sola...@nedharvey.com wrote:

  I am considering building a modest sized storage system with zfs. Some
  of the data on this is quite valuable, some small subset to be backed
  up forever, and I am evaluating back-up options with that in mind.

 You don't need to store the zfs send data stream on your backup media.

NO, zfs send is not a backup.

From a backup, you could restore individual files.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Thomas Burgess



 NO, zfs send is not a backup.

 From a backup, you could restore individual files.

 Jörg


I disagree.

It is a backup.  It's just not an enterprise backup solution.


[zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-16 Thread Simon Breden
Which consumer-priced 1.5TB drives do people currently recommend?

I have had zero read/write/checksum errors in 2 years with my trusty old 
Western Digital WD7500AAKS drives, but now I want to upgrade to a new set of 
drives that are big, reliable and cheap.

As of Jan 2010 it seems the price sweet spot is the 1.5TB drives.

As I had a lot of success with Western Digital drives I thought I would stick 
with WD.

However, this time I might have to avoid Western Digital (see below), so I 
wondered which other recent drives people have found to be decent drives.

WD15EADS:
The model I was looking at was the WD15EADS.
The older 4-platter WD15EADS-00R6B0 revision seems to work OK, from what I 
found, but I prefer fewer platters from noise, vibration, heat and reliability 
perspectives.
The newer 3-platter WD15EADS-00P8B0 revision seems to have serious problems - 
see links below.

WD15EARS:
Also, very recently WD brought out a 3-platter WD15EARS-00Z5B1 revision, based 
on 'Advanced Format', which uses 4KB sectors instead of the traditional 
512-byte sectors.
Again, these drives seem to have serious issues - see links below.
Does ZFS handle this new 4KB sector size automatically and transparently, or 
does something need to be done for it to work?

Reference:
1. On the Synology forum, it seems the older 4-platter 1.5TB EADS 
(WD15EADS-00R6B0) is OK, but the newer 3-platter EADS (WD15EADS-00P8B0) has 
problems:
http://forum.synology.com/enu/viewtopic.php?f=151&t=19131&sid=c1c446863595a5addb8652a4af2d09ca
2. A mac user has problems with WD15EARS-00Z5B1:
http://community.wdc.com/t5/Desktop/WD-1-5TB-Green-drives-Useful-as-door-stops/td-p/1217/page/2
  (WD 1.5TB Green drives - Useful as door stops)
http://community.wdc.com/t5/Desktop/WDC-WD15EARS-00Z5B1-awful-performance/m-p/5242
  (WDC WD15EARS-00Z5B1 awful performance)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-16 Thread Freddie Cash
We're in the process of upgrading our storage servers from Seagate RE.2 500 GB 
and WD 500 GB black drives to WD 1.5 TB green drives (ones with 512B 
sectors).  So far, no problems to report.

We've replaced 6 out of 8 drives in one raidz2 vdev so far (1 drive each 
weekend).  Resilver times have dropped from over 80 hours for the first drive 
to just under 60 for the 6th (the pool is 10TB with 150 GB free).  No checksum 
errors of any kind reported so far, no drive timeouts reported by the 
controller; everything is working as normal.

We're running ZFSv13 on FreeBSD 7.2-STABLE.
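For reference, a drive-at-a-time swap like the one described is usually driven by zpool replace, waiting for each resilver to complete before starting the next (pool and device names here are hypothetical, not from the post):

```shell
# Swap one member of the raidz2 vdev for a new 1.5 TB drive
zpool replace tank da2 da8

# Watch resilver progress; start on the next drive only once this completes
zpool status tank
```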


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-16 Thread Simon Breden
Which drive model/revision number are you using?
I presume you are using the 4-platter version: WD15EADS-00R6B0, but perhaps I 
am wrong.

Also did you run WDTLER.EXE on the drives first, to hasten error reporting 
times?


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Toby Thain


On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:

  I am considering building a modest sized storage system with zfs. Some
  of the data on this is quite valuable, some small subset to be backed
  up forever, and I am evaluating back-up options with that in mind.

 You don't need to store the zfs send data stream on your backup media.
 This would be annoying for the reasons mentioned - some risk of not being
 able to restore in the future (although that's a pretty small risk) and
 inability to restore with any granularity, i.e. you have to restore the
 whole FS if you restore anything at all.

 A better approach would be to zfs send and pipe directly to zfs receive
 on the external media.  This way, in the future, anything which can read
 ZFS can read the backup media, and you have granularity to restore either
 the whole FS, or individual things inside there.

There have also been comments about the extreme fragility of the data
stream compared to other archive formats. In general it is strongly
discouraged for these purposes.

--Toby

 Plus, the only way to guarantee the integrity of a zfs send data stream
 is to perform a zfs receive on that data stream.  So by performing a
 successful receive, you've guaranteed the datastream is not corrupt.  Yet.




Re: [zfs-discuss] Is the disk a member of a zpool?

2010-01-16 Thread Morten-Christian Bernson
Thanks for the tip both of you.  The zdb approach seems viable.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Mike Gerdts
On Sat, Jan 16, 2010 at 5:31 PM, Toby Thain t...@telegraphics.com.au wrote:
 On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:

 I am considering building a modest sized storage system with zfs. Some
 of the data on this is quite valuable, some small subset to be backed
 up forever, and I am evaluating back-up options with that in mind.

 You don't need to store the zfs send data stream on your backup media.
 This would be annoying for the reasons mentioned - some risk of not being
 able to restore in the future (although that's a pretty small risk) and
 inability to restore with any granularity, i.e. you have to restore the
 whole FS if you restore anything at all.

 A better approach would be zfs send and pipe directly to zfs receive
 on
 the external media.  This way, in the future, anything which can read ZFS
 can read the backup media, and you have granularity to restore either the
 whole FS, or individual things inside there.

 There have also been comments about the extreme fragility of the data stream
 compared to other archive formats. In general it is strongly discouraged for
 these purposes.


Yet it is used in ZFS flash archives on Solaris 10 and is slated for
use in the successor to flash archives.  The initial proposal seems
to imply using the same mechanism for a system image backup (instead
of just system provisioning).

http://mail.opensolaris.org/pipermail/caiman-discuss/2010-January/015909.html

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Toby Thain


On 16-Jan-10, at 6:51 PM, Mike Gerdts wrote:

 On Sat, Jan 16, 2010 at 5:31 PM, Toby Thain t...@telegraphics.com.au wrote:

  On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:

   I am considering building a modest sized storage system with zfs. Some
   of the data on this is quite valuable, some small subset to be backed
   up forever, and I am evaluating back-up options with that in mind.

  You don't need to store the zfs send data stream on your backup media.
  This would be annoying for the reasons mentioned - some risk of not
  being able to restore in the future (although that's a pretty small
  risk) and inability to restore with any granularity, i.e. you have to
  restore the whole FS if you restore anything at all.

  A better approach would be to zfs send and pipe directly to zfs receive
  on the external media.  This way, in the future, anything which can
  read ZFS can read the backup media, and you have granularity to restore
  either the whole FS, or individual things inside there.

  There have also been comments about the extreme fragility of the data
  stream compared to other archive formats. In general it is strongly
  discouraged for these purposes.

 Yet it is used in ZFS flash archives on Solaris 10

I can see the temptation, but isn't it a bit under-designed? I think
Mr Nordin might have ranted about this in the past...

--Toby

 and are slated for use in the successor to flash archives.  This initial
 proposal seems to imply using the same mechanism for a system image
 backup (instead of just system provisioning).

 http://mail.opensolaris.org/pipermail/caiman-discuss/2010-January/015909.html

 --
 Mike Gerdts
 http://mgerdts.blogspot.com/




[zfs-discuss] I can't seem to get the pool to export...

2010-01-16 Thread Travis Tabbal
r...@nas:~# zpool export -f raid
cannot export 'raid': pool is busy

I've disabled all the services I could think of. I don't see anything accessing 
it. I also don't see any of the filesystems mounted with mount or zfs mount. 
What's the deal?  This is not the rpool, so I'm not booted off it or anything 
like that. I'm on snv_129. 

I'm attempting to move the main storage to a new pool. I created the new pool, 
used zfs send | zfs recv for the filesystems. That's all fine. The plan was 
to export both pools, and use the import to rename them. I've got the new pool 
exported, but the older one refuses to export. 

Is there some way to get the system to tell me what's using the pool?
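Once the export succeeds, the rename-via-export/import plan sketched above might look like this ('newpool' and 'oldraid' are placeholder names; only 'raid' comes from the post):

```shell
# First stop for "pool is busy": see which processes hold the mountpoints
fuser -c /raid

# Then export both pools and re-import each one under its new name
zpool export -f raid
zpool export newpool
zpool import raid oldraid   # move the old pool out of the way
zpool import newpool raid   # the new pool takes over the old name
```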


[zfs-discuss] Snapshot that won't go away.

2010-01-16 Thread Ian Collins

I have a Solaris 10 update 6 system with a snapshot I can't remove.

zfs destroy -f snap  reports the device as being busy.  fuser doesn't 
show any process using the filesystem, and it isn't shared.


I can unmount the filesystem OK.

Any clues or suggestions of bigger sticks to hit it with?

--
Ian.



Re: [zfs-discuss] I can't seem to get the pool to export...

2010-01-16 Thread Travis Tabbal
Hmm... got it working after a reboot. Odd that it had problems before that. I 
was able to rename the pools and the system seems to be running well now. 
Irritatingly, the settings for sharenfs, sharesmb, quota, etc. didn't get 
copied over with the zfs send/recv. I didn't have that many filesystems, 
though, so it wasn't too bad to reconfigure them.
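On a build around snv_129, a replicated stream can carry the locally set properties along, which may avoid that reconfiguration step; a sketch, assuming 'tank' and 'newtank' as placeholder pool names:

```shell
# -r snapshots the whole hierarchy; -R sends the descendants, their
# snapshots, and locally set properties (sharenfs, sharesmb, quota, ...)
zfs snapshot -r tank@move
zfs send -R tank@move | zfs receive -d newtank
```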