Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Tiernan OToole
Thanks all! I will check out FreeNAS and see what it can do... I will also
check my RAID card and see if it can work with JBOD... fingers crossed...
The machine has a couple of internal SATA ports (I think there are 2, could
be 4), so I was thinking of using those for boot disks and SSDs later...

As a follow-up question on data deduplication: the machine, to start, will
have about 5GB of RAM. I read somewhere that 20TB of storage would require
about 8GB of RAM, depending on block size... Since I don't know block sizes
yet (I store a mix of VMs, TV shows, movies and backups on the NAS), I am
not sure how much memory I will need (my estimate is 10TB raw (8TB usable?)
in a RAID-Z1 pool, and then 3TB raw in a striped pool). If I don't have
enough memory now, can I enable dedup at a later stage when I add memory?
Also, if I pick FreeBSD now and want to move to, say, Nexenta, is that
possible? Assuming the drives are just JBOD drives (to be confirmed), could
they just get imported?

Thanks.


On Mon, Feb 25, 2013 at 6:11 PM, Tim Cook t...@cook.ms wrote:




 On Mon, Feb 25, 2013 at 7:57 AM, Volker A. Brandt v...@bb-c.de wrote:

 Tim Cook writes:
   I need something that will allow me to share files over SMB (3 if
   possible), NFS, AFP (for Time Machine) and iSCSI. Ideally, i would
   like something i can manage easily and something that works with
   the Dell...
 
  All of them should provide the basic functionality you're looking
  for.
   None of them will provide SMB3 (at all) or AFP (without a third
  party package).

 FreeNAS has AFP built-in, including a Time Machine discovery method.

 The latest FreeNAS is still based on Samba 3.x, but they are aware
 of 4.x and will probably integrate it at some point in the future.
 Then you should have SMB3.  I don't know how far along they are...


 Best regards -- Volker



 FreeNAS comes with a package pre-installed to add AFP support.  There is
 no native AFP support in FreeBSD and by association FreeNAS.

 --Tim





-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Sašo Kiselkov
On 02/26/2013 09:33 AM, Tiernan OToole wrote:
 As a follow up question: Data Deduplication: The machine, to start, will
 have about 5Gb  RAM. I read somewhere that 20TB storage would require about
 8GB RAM, depending on block size...

The typical wisdom is that 1TB of dedup'ed data needs about 1GB of RAM, so
5GB of RAM seems too small for a 20TB pool of dedup'ed data.
Unless you know what you're doing, I'd go with just compression and let
dedup be - compression has predictable performance and doesn't suffer from
scaling issues.
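As a back-of-envelope sketch of why "depending on block size" matters so much: the figure of roughly 320 bytes of in-core dedup table (DDT) per unique block is a commonly cited estimate, not an exact number, and the pool size below is just the one from this thread.

```shell
# ddt_gib POOL_TIB BLOCK_KIB -> rough in-core DDT size in GiB (sketch only;
# assumes ~320 bytes per unique block, a commonly quoted estimate).
ddt_gib() {
  awk -v tib="$1" -v bs="$2" \
    'BEGIN { printf "%.0f\n", tib * 2^40 / (bs * 2^10) * 320 / 2^30 }'
}

for bs in 8 64 128; do
  echo "20 TiB pool, ${bs} KiB avg blocks: ~$(ddt_gib 20 "$bs") GiB of DDT"
done
```

At 128 KiB media-sized blocks the table is an order of magnitude smaller than at 8 KiB VM-image blocks, which is why a mixed VM/media pool is so hard to size up front.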

 If i dont have enough memory now, can i enable DeDupe at a later stage
 when i add memory?

Yes.

 Also, if i pick FreeBSD now, and want to move to, say, Nexenta, is that
 possible? Assuming the drives are just JBOD drives (to be confirmed)
 could they just get imported?

Yes, that's the whole point of open storage.

I'd also recommend that you go and subscribe to z...@lists.illumos.org,
since this list is going to get shut down by Oracle next month.

Cheers,
--
Saso


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Tim Cook
On Mon, Feb 25, 2013 at 10:33 PM, Tiernan OToole lsmart...@gmail.com wrote:

 Thanks all! I will check out FreeNAS and see what it can do... I will also
 check my RAID Card and see if it can work with JBOD... fingers crossed...
 The machine has a couple internal SATA ports (think there are 2, could be
 4) so i was thinking of using those for boot disks and SSDs later...

 As a follow up question: Data Deduplication: The machine, to start, will
 have about 5Gb  RAM. I read somewhere that 20TB storage would require about
 8GB RAM, depending on block size... Since i dont know block sizes, yet (i
 store a mix of VMs, TV Shows, Movies and backups on the NAS) I am not sure
 how much memory i will need (my estimate is 10TB RAW (8TB usable?) in a
 ZRAID1 pool, and then 3TB RAW in a striped pool). If i dont have enough
 memory now, can i enable DeDupe at a later stage when i add memory? Also,
 if i pick FreeBSD now, and want to move to, say, Nexenta, is that possible?
 Assuming the drives are just JBOD drives (to be confirmed) could they just
 get imported?

 Thanks.




Yes, you can move between FreeBSD and Illumos based distros as long as you
are at a compatible zpool version (which they currently are).  I'd avoid
deduplication unless you absolutely need it... it's still a bit of a
kludge.  Stick to compression and your world will be a much happier place.
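For reference, a minimal sketch of such a move, assuming a hypothetical pool named "tank" and disks the controller really presents as JBOD:

```shell
# On the old (e.g. FreeBSD) host: cleanly detach the pool.
zpool export tank

# On the new (e.g. illumos/Nexenta) host, after moving the disks over:
zpool import          # scan attached disks, list pools available for import
zpool import tank     # import by name (-f only if the export was skipped)
zpool upgrade -v      # check supported pool versions before any upgrade
```

Holding off on `zpool upgrade` keeps the pool importable on the old OS, since pool version upgrades are one-way.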

--Tim


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Tiernan OToole
Thanks again lads. I will take all that advice on board, and will join
that new group also!

Thanks again!

--Tiernan


On Tue, Feb 26, 2013 at 8:44 AM, Tim Cook t...@cook.ms wrote:



 On Mon, Feb 25, 2013 at 10:33 PM, Tiernan OToole lsmart...@gmail.com wrote:

 Thanks all! I will check out FreeNAS and see what it can do... I will
 also check my RAID Card and see if it can work with JBOD... fingers
 crossed... The machine has a couple internal SATA ports (think there are 2,
 could be 4) so i was thinking of using those for boot disks and SSDs
 later...

 As a follow up question: Data Deduplication: The machine, to start, will
 have about 5Gb  RAM. I read somewhere that 20TB storage would require about
 8GB RAM, depending on block size... Since i dont know block sizes, yet (i
 store a mix of VMs, TV Shows, Movies and backups on the NAS) I am not sure
 how much memory i will need (my estimate is 10TB RAW (8TB usable?) in a
 ZRAID1 pool, and then 3TB RAW in a striped pool). If i dont have enough
 memory now, can i enable DeDupe at a later stage when i add memory? Also,
 if i pick FreeBSD now, and want to move to, say, Nexenta, is that possible?
 Assuming the drives are just JBOD drives (to be confirmed) could they just
 get imported?

 Thanks.




 Yes, you can move between FreeBSD and Illumos based distros as long as you
 are at a compatible zpool version (which they currently are).  I'd avoid
 deduplication unless you absolutely need it... it's still a bit of a
 kludge.  Stick to compression and your world will be a much happier place.

 --Tim





-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Robert Milkowski
Solaris 11.1 (free for non-prod use). 

 

 

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tiernan OToole
Sent: 25 February 2013 14:58
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS Distro Advice

 

Good morning all.

 

My home NAS died over the weekend, leaving me with a lot of spare drives
(five 2TB and three 1TB disks). I have a Dell PowerEdge 2900 server sitting
in the house which has not been doing much lately (I bought it a few years
back with the intent of using it as a storage box, since it has 8 hot-swap
drive bays), and I am now looking at building the NAS using ZFS...

 

But now I am confused as to which OS to use... OpenIndiana? Nexenta?
FreeNAS/FreeBSD?

 

I need something that will allow me to share files over SMB (3 if possible),
NFS, AFP (for Time Machine) and iSCSI. Ideally, I would like something I can
manage easily and something that works with the Dell...

 

Any recommendations? Any comparisons between them?

 

Thanks.


 

-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie 



Re: [zfs-discuss] zfs sata mirror slower than single disk

2013-02-26 Thread hagai
For what it's worth:
I had the same problem and found the answer here -
http://forums.freebsd.org/showthread.php?t=27207




Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Gary Driggs
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:

I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
this list is going to get shut down by Oracle next month.


Whose description still reads, "everything ZFS running on illumos-based
distributions."

-Gary


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Sašo Kiselkov
On 02/26/2013 03:51 PM, Gary Driggs wrote:
 On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
 
 I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
 this list is going to get shut down by Oracle next month.
 
 Whose description still reads, everything ZFS running on illumos-based
 distributions.

We've never dismissed any topic or issue as not our problem. All
sensible ZFS-related discussion is welcome and taken seriously.

Cheers,
--
Saso


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Eugen Leitl
On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
 On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
 
 I'd also recommend that you go and subscribe to z...@lists.illumos.org, since

I can't seem to find this list. Do you have a URL for that?
Mailman, hopefully?

 this list is going to get shut down by Oracle next month.
 
 
 Whose description still reads, everything ZFS running on illumos-based
 distributions.
 
 -Gary


-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Sašo Kiselkov
On 02/26/2013 05:57 PM, Eugen Leitl wrote:
 On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
 On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:

 I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
 
 I can't seem to find this list. Do you have an URL for that?
 Mailman, hopefully?

http://wiki.illumos.org/display/illumos/illumos+Mailing+Lists

--
Saso



Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Eugen Leitl
On Tue, Feb 26, 2013 at 06:01:39PM +0100, Sašo Kiselkov wrote:
 On 02/26/2013 05:57 PM, Eugen Leitl wrote:
  On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
  On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
 
  I'd also recommend that you go and subscribe to z...@lists.illumos.org, 
  since
  
  I can't seem to find this list. Do you have an URL for that?
  Mailman, hopefully?
 
 http://wiki.illumos.org/display/illumos/illumos+Mailing+Lists

Oh, it's the illumos-zfs one. Had me confused.


Re: [zfs-discuss] zfs sata mirror slower than single disk

2013-02-26 Thread Paul Kraus
Be careful when testing ZFS with iozone. I ran a bunch of tests many
years ago that produced results that did not pass a basic sanity check. There
was *something* about the iozone test data that ZFS either did not like or liked
very much, depending on the specific test.

I eventually wrote my own very crude tool to test exactly what our
workload was and started getting results that matched the reality we saw.

On Jul 17, 2012, at 4:18 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us 
wrote:

 On Tue, 17 Jul 2012, Michael Hase wrote:
 
 To work around these caching effects, just use a file at least 2 times the size of
 RAM; iostat then shows the numbers really coming from disk. I always test
 like this. A re-read rate of 8.2 GB/s is really just memory bandwidth, but
 quite impressive ;-)
 
 Ok, the iozone benchmark finally completed.  The results do suggest that 
 reading from mirrors substantially improves the throughput. This is 
 interesting since the results differ (better than) from my 'virgin mount' 
 test approach:
 
 Command line used: iozone -a -i 0 -i 1 -y 64 -q 512 -n 8G -g 256G
 
  KB  reclen   write rewritereadreread
 8388608  64  572933 1008668  6945355  7509762
 8388608 128 2753805 2388803  6482464  7041942
 8388608 256 2508358 2331419  2969764  3045430
 8388608 512 2407497 2131829  3021579  3086763
16777216  64  671365  879080  6323844  6608806
16777216 128 1279401 2286287  6409733  6739226
16777216 256 2382223 2211097  2957624  3021704
16777216 512 2237742 2179611  3048039  3085978
33554432  64  933712  699966  6418428  6604694
33554432 128  459896  431640  6443848  6546043
33554432 256  90  430989  2997615  3026246
33554432 512  427158  430891  3042620  3100287
67108864  64  426720  427167  6628750  6738623
67108864 128  419328  422581  153  6743711
67108864 256  419441  419129  3044352  3056615
67108864 512  431053  417203  3090652  3112296
   134217728  64  417668   55434   759351   760994
   134217728 128  409383  400433   759161   765120
   134217728 256  408193  405868   763892   766184
   134217728 512  408114  403473   761683   766615
   268435456  64  418910   55239   768042   768498
   268435456 128  408990  399732   763279   766882
   268435456 256  413919  399386   760800   764468
   268435456 512  410246  403019   766627   768739
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Richard Elling
On Feb 26, 2013, at 12:33 AM, Tiernan OToole lsmart...@gmail.com wrote:

 Thanks all! I will check out FreeNAS and see what it can do... I will also 
 check my RAID Card and see if it can work with JBOD... fingers crossed... The 
 machine has a couple internal SATA ports (think there are 2, could be 4) so i 
 was thinking of using those for boot disks and SSDs later... 
 
 As a follow up question: Data Deduplication: The machine, to start, will have 
 about 5Gb  RAM. I read somewhere that 20TB storage would require about 8GB 
 RAM, depending on block size... Since i dont know block sizes, yet (i store a 
 mix of VMs, TV Shows, Movies and backups on the NAS)

Consider using different policies for different data. For traditional file
systems, you had relatively few policy options: readonly, nosuid, quota, etc.
With ZFS, dedup and compression are also policy options. In your case, dedup
for your media is not likely to be a good policy, but dedup for your backups
could be a win (unless you're using something that doesn't already back up
duplicate data -- e.g. most backup utilities).
A way to approach this is to think of your directory structure and create
file systems to match the policies. For example:
/home/richard = compressed (default top level, since properties are inherited)
/home/richard/media = compressed
/home/richard/backup = compressed + dedup
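A sketch of how such a layout could be created, with a hypothetical pool name "tank" (the property names are standard ZFS; the dataset paths mirror the example above):

```shell
# Properties set on a parent dataset are inherited by its children.
zfs create -o compression=on tank/home            # top level: compressed
zfs create tank/home/richard                      # compressed via inheritance
zfs create tank/home/richard/media                # compressed via inheritance
zfs create -o dedup=on tank/home/richard/backup   # compressed + dedup'ed

# Verify the effective policy on every dataset:
zfs get -r compression,dedup tank/home
```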

 -- richard

 I am not sure how much memory i will need (my estimate is 10TB RAW (8TB 
 usable?) in a ZRAID1 pool, and then 3TB RAW in a striped pool). If i dont 
 have enough memory now, can i enable DeDupe at a later stage when i add 
 memory? Also, if i pick FreeBSD now, and want to move to, say, Nexenta, is 
 that possible? Assuming the drives are just JBOD drives (to be confirmed) 
 could they just get imported?
 
 Thanks.
 
 
 On Mon, Feb 25, 2013 at 6:11 PM, Tim Cook t...@cook.ms wrote:
 
 
 
 On Mon, Feb 25, 2013 at 7:57 AM, Volker A. Brandt v...@bb-c.de wrote:
 Tim Cook writes:
   I need something that will allow me to share files over SMB (3 if
   possible), NFS, AFP (for Time Machine) and iSCSI. Ideally, i would
   like something i can manage easily and something that works with
   the Dell...
 
  All of them should provide the basic functionality you're looking
  for.
   None of them will provide SMB3 (at all) or AFP (without a third
  party package).
 
 FreeNAS has AFP built-in, including a Time Machine discovery method.
 
 The latest FreeNAS is still based on Samba 3.x, but they are aware
 of 4.x and will probably integrate it at some point in the future.
 Then you should have SMB3.  I don't know how far along they are...
 
 
 Best regards -- Volker
 
 
 
 FreeNAS comes with a package pre-installed to add AFP support.  There is no 
 native AFP support in FreeBSD and by association FreeNAS.  
 
 --Tim
  
 
 
 
 -- 
 Tiernan O'Toole
 blog.lotas-smartman.net
 www.geekphotographer.com
 www.tiernanotoole.ie

--

richard.ell...@richardelling.com
+1-760-896-4422











[zfs-discuss] SVM ZFS

2013-02-26 Thread Morris Hooten
Besides copying data from /dev/md/dsk/x volume manager filesystems to new
ZFS filesystems, does anyone know of any conversion tools to make the
conversion/migration from SVM to ZFS easier?

Thanks


Morris Hooten
Unix SME
Integrated Technology Delivery
mhoo...@us.ibm.com
Office: 720-342-5614


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Ian Collins

Robert Milkowski wrote:


Solaris 11.1 (free for non-prod use).



But a ticking bomb if you use a cache device.

--

Ian.


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Robert Milkowski

 
 Robert Milkowski wrote:
 
  Solaris 11.1 (free for non-prod use).
 
 
 But a ticking bomb if you use a cache device.


It's been fixed in SRU (although this is only for customers with a support
contract - still, will be in 11.2 as well).

Then, I'm sure there are other bugs which are fixed in S11 and not in
Illumos (and vice-versa).

-- 
Robert Milkowski
http://milek.blogspot.com




Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Ian Collins

Robert Milkowski wrote:

Robert Milkowski wrote:

Solaris 11.1 (free for non-prod use).


But a ticking bomb if you use a cache device.


It's been fixed in SRU (although this is only for customers with a support
contract - still, will be in 11.2 as well).

Then, I'm sure there are other bugs which are fixed in S11 and not in
Illumos (and vice-versa).



There may well be, but in seven+ years of using ZFS, this was the first 
one to cost me a pool.


--
Ian.



Re: [zfs-discuss] SVM ZFS

2013-02-26 Thread Jim Klimov

On 2013-02-26 21:30, Morris Hooten wrote:

Besides copying data from /dev/md/dsk/x volume manager filesystems to
new zfs filesystems
does anyone know of any zfs conversion tools to make the
conversion/migration from svm to zfs
easier?


Do you mean something like a tool that would change metadata around
your userdata in-place and turn an SVM volume into a ZFS pool, like
Windows' built-in FAT to NTFS conversion? No, there's nothing like it.

However, depending on your old system's configuration, you might have
to be careful about choice of copy programs. Namely, if your setup
used some ACLs (beyond standard POSIX access bits), then you'd need
ACL-aware copying tools. Sun tar and cpio are some (see manpages about
usage examples), rsync 3.0.10 was recently reported to support Solaris
ACLs as well, but I didn't test that myself. GNU tar and cpio are known
to do a poor job with intimate Solaris features, though they might be
superior for some other tasks. Basic (Sun, not GNU) cp and mv should
work correctly too.

I most often use "rsync -avPHK /src/ /dst/", especially if there are
no ACLs to think about, or the target's inheritable ACLs are acceptable
(and overriding them with the original's access rights might even be wrong).
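As a sketch of both approaches on Solaris, with placeholder paths (the cpio ACL flag and the rsync version requirement follow the caveats above; verify against your own man pages before trusting it with ACL'd data):

```shell
# ACL-aware local copy using Sun cpio in pass-through mode:
# -p pass mode, -P preserve Solaris ACLs, -d make directories, -m keep mtimes
cd /src && find . -depth -print | cpio -pPdm /dst

# rsync alternative; Solaris ACL support reportedly needs rsync >= 3.0.10:
rsync -avPHK /src/ /dst/
```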

Also, before you do the migration, think ahead of the storage and IO
requirements for the datasets. For example, log files are often huge,
compress into orders of magnitude less, and the IOPS loss might be
negligible (or even boost, due to smaller hardware IOs and less seeks).
Randomly accessed (written) data might not like heavier compressions.
Databases or VM images might benefit from smaller maximum block sizes,
although often these are not made 1:1 with DB block size, but rather
balance about 4 DB entries in an FS block of 32Kb or 64Kb (from what
I saw suggested on the list).

Singly-written data, like OS images, might benefit from compression as
well. If you have local zones, you might benefit from carrying over
(or installing from scratch) one as a typical example DUMMY into a
dedicated dataset, then cloning it into many actual zone roots as you'd
need, and rsync -cavPHK --delete-after from originals into this
dataset - this way only differing files (or parts thereof) would be
transferred, giving you the benefits of cloning (space saving) without
the downsides of deduplication.

Also, for data in the zones (such as database files, tomcat/glassfish
application server roots, etc.) you might like to use separate dataset
hierarchies mounted via delegation of a root ZFS dataset into zones.
This way your zoneroots would live a separate life from application
data and non-packaged applications, which might simplify backups, etc.
and you might be able to store these pieces in different pools (i.e.
SSDs for some data and HDDs for other - though most list members would
rightfully argue in favor of L2ARC on the SSDs).

HTH,
//Jim Klimov



Re: [zfs-discuss] SVM ZFS

2013-02-26 Thread Paul Kraus
On Feb 26, 2013, at 6:05 PM, Jim Klimov jimkli...@cos.ru wrote:

 On 2013-02-26 21:30, Morris Hooten wrote:
 Besides copying data from /dev/md/dsk/x volume manager filesystems to
 new zfs filesystems
 does anyone know of any zfs conversion tools to make the
 conversion/migration from svm to zfs
 easier?

 However, depending on your old system's configuration, you might have
 to be careful about choice of copy programs. Namely, if your setup
 used some ACLs (beyond standard POSIX access bits), then you'd need
 ACL-aware copying tools. Sun tar and cpio are some (see manpages about
 usage examples), rsync 3.0.10 was recently reported to support Solaris
 ACLs as well, but I didn't test that myself. GNU tar and cpio are known
 to do a poor job with intimate Solaris features, though they might be
 superior for some other tasks. Basic (Sun, not GNU) cp and mv should
 work correctly too.

Under Solaris 10 I found 'cp -pr' to be both the most reliable and the fastest
way to move data into, out of, and between ZFS datasets.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: [zfs-discuss] SVM ZFS

2013-02-26 Thread Jim Klimov

Ah, I forgot to mention - ufsdump|ufsrestore was at some time also
a recommended way of such transition ;)

I think it should be aware of all intimacies of the FS, including
sparse files which reportedly may puzzle some other archivers.
Although with any sort of ZFS compression (including lightweight
zle) zero-filled blocks should translate into zero IOs. (Maybe
some metadata would appear, to address the holes, however).
With proper handling of sparse files you don't write any of that
voidness into the FS and you don't process anything on reads.

Have fun,
//Jim


Re: [zfs-discuss] SVM ZFS

2013-02-26 Thread Paul Kraus
On Feb 26, 2013, at 6:19 PM, Jim Klimov jimkli...@cos.ru wrote:

 Ah, I forgot to mention - ufsdump|ufsrestore was at some time also
 a recommended way of such transition ;)

The last time I looked at using ufsdump/ufsrestore for this, ufsrestore
was NOT aware of ZFS ACL semantics. That was under Solaris 10, but I would be
surprised if the ufsrestore code has changed since then.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: [zfs-discuss] zfs sata mirror slower than single disk

2013-02-26 Thread Bob Friesenhahn

On Tue, 26 Feb 2013, hagai wrote:


for what is worth..
I had the same problem and found the answer here -
http://forums.freebsd.org/showthread.php?t=27207


Given enough sequential I/O requests, zfs mirrors behave very much
like RAID-0 for reads.  Sequential prefetch is very important in order
to avoid the latencies.


While this script may not work perfectly as-is for FreeBSD, it was
very good at discovering a zfs performance bug (since corrected) and
is still an interesting exercise to see how ZFS ARC caching
helps for re-reads.  See
http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh
The script will exercise an initial uncached read from disks, and then
a (hopefully) cached re-read from disks.  I think that it serves as a
useful benchmark.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] SVM ZFS

2013-02-26 Thread Alfredo De Luca
What about Solaris Live Upgrade?




On Wed, Feb 27, 2013 at 10:36 AM, Paul Kraus p...@kraus-haus.org wrote:

 On Feb 26, 2013, at 6:19 PM, Jim Klimov jimkli...@cos.ru wrote:

  Ah, I forgot to mention - ufsdump|ufsrestore was at some time also
  a recommended way of such transition ;)

 The last time I looked at using ufsdump/ufsrestore for this
 ufsrestore was NOT aware of ZFS ACL semantics. That was under Solaris 10,
 but I would be surprised if the ufsrestore code has changed since then.

 --
 Paul Kraus
 Deputy Technical Director, LoneStarCon 3
 Sound Coordinator, Schenectady Light Opera Company





-- 
*Alfredo*


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Bob Friesenhahn

On Tue, 26 Feb 2013, Gary Driggs wrote:


On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:

  I'd also recommend that you go and subscribe to z...@lists.illumos.org, 
since this list is going to get shut
  down by Oracle next month.


Whose description still reads, everything ZFS running on illumos-based 
distributions.


Even FreeBSD's zfs is now based on zfs from Illumos.  FreeBSD and 
Linux zfs developers contribute fixes back to zfs in Illumos.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Bob Friesenhahn

On Tue, 26 Feb 2013, Richard Elling wrote:

Consider using different policies for different data. For traditional file 
systems, you
had relatively few policy options: readonly, nosuid, quota, etc. With ZFS, 
dedup and
compression are also policy options. In your case, dedup for your media is not 
likely
to be a good policy, but dedup for your backups could be a win (unless you're 
using
something that already doesn't backup duplicate data -- eg most backup 
utilities).
A way to approach this is to think of your directory structure and create file 
systems
to match the policies. For example:


I am finding that rsync with the right options (to directly 
block-overwrite) plus zfs snapshots is providing me with pretty 
amazing deduplication for backups without even enabling 
deduplication in zfs.  Now backup storage goes a very long way.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] SVM ZFS

2013-02-26 Thread Ian Collins

Alfredo De Luca wrote:
On Wed, Feb 27, 2013 at 10:36 AM, Paul Kraus p...@kraus-haus.org wrote:


On Feb 26, 2013, at 6:19 PM, Jim Klimov jimkli...@cos.ru wrote:

 Ah, I forgot to mention - ufsdump|ufsrestore was at some time also
 a recommended way of such transition ;)

The last time I looked at using ufsdump/ufsrestore for
this ufsrestore was NOT aware of ZFS ACL semantics. That was under
Solaris 10, but I would be surprised if the ufsrestore code has
changed since then.




what about Solaris live upgrade?



It's been a long time, but I'm sure LU only supports UFS to ZFS migration
for the root pool.


--
Ian.



Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Ian Collins

Bob Friesenhahn wrote:

On Tue, 26 Feb 2013, Richard Elling wrote:

Consider using different policies for different data. For traditional file 
systems, you
had relatively few policy options: readonly, nosuid, quota, etc. With ZFS, 
dedup and
compression are also policy options. In your case, dedup for your media is not 
likely
to be a good policy, but dedup for your backups could be a win (unless you're 
using
something that already doesn't backup duplicate data -- eg most backup 
utilities).
A way to approach this is to think of your directory structure and create file 
systems
to match the policies. For example:

I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing deduplication for backups without even enabling
deduplication in zfs.  Now backup storage goes a very long way.


We do the same for all of our legacy operating system backups. Take a
snapshot then do an rsync: an excellent way of maintaining
incremental backups for those.


--
Ian.



Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Bob Friesenhahn

On Wed, 27 Feb 2013, Ian Collins wrote:

I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing deduplication for backups without even enabling
deduplication in zfs.  Now backup storage goes a very long way.


We do the same for all of our legacy operating system backups. Take a
snapshot, then do an rsync; it's an excellent way of maintaining
incremental backups for those systems.


Magic rsync options used:

  -a --inplace --no-whole-file --delete-excluded

This causes rsync to overwrite the file blocks in place rather than 
writing to a new temporary file first.  As a result, zfs COW produces 
primitive deduplication of at least the unchanged blocks (by writing 
nothing) while writing new COW blocks for the changed blocks.
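Put together, the cycle Bob describes looks roughly like this (the host,
path, and pool names are made up for illustration):

```shell
#!/bin/sh
# Sketch: rsync a legacy host into a ZFS dataset, then snapshot it.
SRC=root@legacyhost:/export/home/   # hypothetical source
DST=/tank/backup/legacyhost         # hypothetical dataset mountpoint

rsync -a --inplace --no-whole-file --delete-excluded "$SRC" "$DST"

# Only the blocks rsync actually rewrote become new COW blocks, so each
# snapshot holds little more than that day's changes.
zfs snapshot tank/backup/legacyhost@$(date +%Y-%m-%d)
```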


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SVM ZFS

2013-02-26 Thread Alfredo De Luca
Not sure


http://docs.oracle.com/cd/E19253-01/821-0438/ggzdo/index.html




On Wed, Feb 27, 2013 at 2:30 PM, Ian Collins i...@ianshome.com wrote:

 Alfredo De Luca wrote:

 On Wed, Feb 27, 2013 at 10:36 AM, Paul Kraus p...@kraus-haus.org wrote:

 On Feb 26, 2013, at 6:19 PM, Jim Klimov jimkli...@cos.ru wrote:

  Ah, I forgot to mention - ufsdump|ufsrestore was at some time also
  a recommended way of such transition ;)

 The last time I looked at using ufsdump/ufsrestore for
 this, ufsrestore was NOT aware of ZFS ACL semantics. That was under
 Solaris 10, but I would be surprised if the ufsrestore code has
 changed since then.


  what about Solaris live upgrade?


 It's been a long time, but I'm sure LU only supports UFS to ZFS migration
 for the root pool.

 --
 Ian.


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




-- 
*Alfredo*
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Ian Collins

Bob Friesenhahn wrote:

On Wed, 27 Feb 2013, Ian Collins wrote:

I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing deduplication for backups without even enabling
deduplication in zfs.  Now backup storage goes a very long way.

We do the same for all of our legacy operating system backups. Take a
snapshot, then do an rsync; it's an excellent way of maintaining
incremental backups for those systems.

Magic rsync options used:

-a --inplace --no-whole-file --delete-excluded

This causes rsync to overwrite the file blocks in place rather than
writing to a new temporary file first.  As a result, zfs COW produces
primitive deduplication of at least the unchanged blocks (by writing
nothing) while writing new COW blocks for the changed blocks.


Do these options impact performance or reduce the incremental stream sizes?

I just use -a --delete and the snapshots don't take up much space 
(compared with the incremental stream sizes).


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Jim Klimov

On 2013-02-27 05:36, Ian Collins wrote:

Bob Friesenhahn wrote:

On Wed, 27 Feb 2013, Ian Collins wrote:

I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing deduplication for backups without even enabling
deduplication in zfs.  Now backup storage goes a very long way.

We do the same for all of our legacy operating system backups. Take a
snapshot, then do an rsync; it's an excellent way of maintaining
incremental backups for those systems.

Magic rsync options used:

-a --inplace --no-whole-file --delete-excluded

This causes rsync to overwrite the file blocks in place rather than
writing to a new temporary file first.  As a result, zfs COW produces
primitive deduplication of at least the unchanged blocks (by writing
nothing) while writing new COW blocks for the changed blocks.


Do these options impact performance or reduce the incremental stream sizes?

I just use -a --delete and the snapshots don't take up much space
(compared with the incremental stream sizes).




Well, to be certain, you can create a dataset with a large file in it,
snapshot it, rsync over a changed variant of the file, then snapshot again
and compare referenced sizes. If the file was rewritten into a new temporary
one and then renamed over the original, you'd likely end up with as much
used storage as for the original file. If only the changes are written into
it in-place, then you'd use a lot less space (and you'd not see a temporary
.garbledfilename in the directory during the process).
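That check could be scripted along these lines (the pool name and file sizes
are arbitrary; this assumes root access to a pool named "tank"):

```shell
# Sketch of the space comparison Jim suggests.
zfs create tank/rtest
dd if=/dev/urandom of=/tank/rtest/big bs=1M count=100
zfs snapshot tank/rtest@before

# Modify ~1MB of a copy, then push it back with in-place writes.
cp /tank/rtest/big /var/tmp/big
dd if=/dev/urandom of=/var/tmp/big bs=1M count=1 conv=notrunc
rsync --inplace --no-whole-file /var/tmp/big /tank/rtest/big

zfs snapshot tank/rtest@after
# With --inplace, the 'before' snapshot should pin only the changed
# blocks; without it, it would pin the whole original file.
zfs list -t snapshot -o name,used,refer -r tank/rtest
```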

If you use rsync over network to back up stuff, here's an example of
SMF wrapper for rsyncd, and a config sample to make a snapshot after
completion of the rsync session.

http://wiki.openindiana.org/oi/rsync+daemon+service+on+OpenIndiana

HTH,
//Jim Klimov
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss