[zfs-discuss] Fwd: Replicating many home directories between hosts

2008-12-18 Thread Scott Williamson
On Thu, Dec 18, 2008 at 4:57 PM, Ian Collins i...@ianshome.com wrote:


 Is anyone out there replicating a thousand or more ZFS filesystems between
 hosts using zfs send/receive?


I did this with about 2000 datasets on two x4500s running patched Solaris
10U5. Most directories had just a copy of skel in them, but a few had a
gigabyte or two.

I have been attempting to do this, but I keep producing toxic streams that
 panic the receiving host.  So far, about 1 in 1500 (2 out of about 3000)
 incremental streams appear toxic.


Never had a panic. I did have to keep the receiving-side parent dataset
unmounted, i.e. zfs set mountpoint=legacy; on newer builds you can set it to none.
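
For what it's worth, the receiving side looks roughly like this (pool and
dataset names here are placeholders, not my actual layout):

  # keep the parent of the received datasets unmounted on the target host
  zfs set mountpoint=legacy backup/homes

  # or, on builds that support it, as noted above
  zfs set mountpoint=none backup/homes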

- scott


[zfs-discuss] mismatched replication level question

2008-12-09 Thread Scott Williamson
When I attempt to create a 46-disk pool with 5- and 6-disk raidz vdevs, I get
the following messages:

mismatched replication level: both 5-way and 6-way raidz vdevs are present
mismatched replication level: both 6-way and 5-way raidz vdevs are present

I expect this is correct.[1]  But what does it mean for performance or other
issues? Why am I being warned?

The factory config for x4500s used a raidz layout with both 5- and 6-disk vdevs.
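
For context, the create command is along these lines (device names below are
placeholders rather than my actual layout); zpool refuses the mixed widths
unless forced:

  zpool create tank \
      raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
      raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 \
      ... remaining 5- and 6-disk raidz vdevs ...

  # the errors above are printed and the pool is not created;
  # -f accepts the mixed vdev widths and creates the pool anyway
  zpool create -f tank raidz ... raidz ...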

[1] http://docs.sun.com/app/docs/doc/819-5461/gazgc?a=view


[zfs-discuss] ZFS+NFS4 strange timestamps on file creation

2008-12-04 Thread Scott Williamson
Has anyone seen files created by a Linux client end up with negative or zero
creation timestamps on ZFS datasets exported over NFS?
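
In case anyone wants to compare, the timestamps can be checked like this (the
path is just an example):

  # on the Solaris server; -E prints full-precision timestamps
  ls -lE /tank/home/someuser/somefile

  # on the Linux client
  stat /mnt/home/someuser/somefile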


Re: [zfs-discuss] ZFS+NFS4 strange timestamps on file creation

2008-12-04 Thread Scott Williamson
On Thu, Dec 4, 2008 at 4:52 PM, Ed Spencer [EMAIL PROTECTED] wrote:
 Yes, I've seen them on nfs filesystems on solaris10 using a Netapp nfs
 server.
 Here's a link to a solution that I just implemented on a solaris10
 server:
 https://equoria.net/index.php/Value_too_large_for_defined_data_type

I tried that and saw no change. I can ls the files on my Linux client and on
the Solaris 10 server without getting the error message "Value too large for
defined data type", both before and after setting that tunable.

I am going to blame the application for now.


Re: [zfs-discuss] Is SUNWhd for Thumper only?

2008-12-01 Thread Scott Williamson
Try it and tell us if it works :)

It might have hooks into the specific controller driver.
On Mon, Dec 1, 2008 at 1:45 PM, Joe S [EMAIL PROTECTED] wrote:

 I read Ben Rockwood's blog post about Thumpers and SMART
 (http://cuddletech.com/blog/pivot/entry.php?id=993). Will the SUNWhd
 package only work on a Thumper? Can I use this on my snv_101 system
 with AMD 64 bit processor and nVidia SATA?



Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-12-01 Thread Scott Williamson
Hi,

On Mon, Dec 1, 2008 at 3:37 PM, Eric Hill [EMAIL PROTECTED] wrote:
 Any thoughts on how come Solaris/id isn't seeing the full group list for the 
 user?

Do an ldapsearch and dump the attributes for the group. If it is using
memberUid to list the members, Solaris should work; if it is using
uniqueMember, then it will not.

As far as I remember.
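
Something along these lines, with the host, base DN, and group name as
placeholders (exact flags differ a bit between the Solaris and OpenLDAP
clients):

  ldapsearch -h ldapserver -b "ou=groups,dc=example,dc=com" \
      "(cn=somegroup)" memberUid uniqueMember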


Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-11-27 Thread Scott Williamson
I have Solaris 10 set to resolve user information from my directory (LDAP).
I only get primary group information, not secondary. We use eDirectory via
LDAP, and the attribute it uses for group membership is not the one that
Solaris looks for.

If you run 'id username' on the box, does it show the user's secondary
groups?
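
A quick way to check, with a made-up account name:

  $ id -a someuser
  uid=1001(someuser) gid=100(staff) groups=100(staff),2001(research)

If only the primary group comes back, the directory lookup is the problem
rather than anything on the ZFS/Samba side.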


Re: [zfs-discuss] OpenStorage GUI

2008-11-13 Thread Scott Williamson
On Thu, Nov 13, 2008 at 7:35 AM, Ross [EMAIL PROTECTED] wrote:


 PS.  Adam, if it is possible to get this image working under ESX and
 VirtualBox, it would be good if Sun could publish instructions for running
 it under those systems.


Or instructions on how to dd the boot disk bits to the boot disk on one's
thumper? :)


Re: [zfs-discuss] OpenStorage GUI

2008-11-12 Thread Scott Williamson
On Tue, Nov 11, 2008 at 12:52 PM, Adam Leventhal [EMAIL PROTECTED] wrote:

 On Nov 11, 2008, at 9:38 AM, Bryan Cantrill wrote:

  Just to throw some ice-cold water on this:

  1.  It's highly unlikely that we will ever support the x4500 -- only the
 x4540 is a real possibility.



 And to warm things up a bit: there's already an upgrade path from the
 x4500 to the x4540 so that would be required before any upgrade to the
 equivalent of the Sun Storage 7210.


Why exactly will this not run on an x4500? The idea behind buying them was
that ZFS would run on any x86 hardware and that the x4500 would run more than
just Solaris. That flexibility is important.

I can certainly understand wanting full control from hardware to software for
an integrated NAS device, but this looks like a nice management application
built on top of OpenSolaris and other open source software.

What hardware is required to upgrade the x4500 to the same functionality?


[zfs-discuss] OpenStorage GUI

2008-11-11 Thread Scott Williamson
Hi,

Is this software (http://www.sun.com/storage/disk_systems/unified_storage/features.jsp)
available for people who already have thumpers?

--
[EMAIL PROTECTED]


Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-10 Thread Scott Williamson
I have an open ticket to have these putback into Solaris 10.

On Fri, Nov 7, 2008 at 3:24 PM, Ian Collins [EMAIL PROTECTED] wrote:

 Brent Jones wrote:
  Theres been a couple threads about this now, tracked some bug
 ID's/ticket:
 
  6333409
  6418042
 I see these are fixed in build 102.

 Are they targeted to get back to Solaris 10 via a patch?

 If not, is it worth escalating the issue with support to get a patch?

 --
 Ian.




Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-10 Thread Scott Williamson
If anyone out there has a support contract with Sun that covers Solaris 10
support, feel free to email me and/or Sun and have them add you to my support
case.

The Sun case is 66104157, and I am seeking to have 6333409 and 6418042
putback into Solaris 10.

CR 6712788 was closed as a duplicate of CR 6421958, the fix for which is
scheduled to be included in Update 6.
On Mon, Nov 10, 2008 at 12:24 PM, Scott Williamson 
[EMAIL PROTECTED] wrote:

 I have an open ticket to have these putback into Solaris 10.


 On Fri, Nov 7, 2008 at 3:24 PM, Ian Collins [EMAIL PROTECTED] wrote:

 Brent Jones wrote:
  Theres been a couple threads about this now, tracked some bug
 ID's/ticket:
 
  6333409
  6418042
 I see these are fixed in build 102.

 Are they targeted to get back to Solaris 10 via a patch?

 If not, is it worth escalating the issue with support to get a patch?

 --
 Ian.






Re: [zfs-discuss] Improving zfs send performance

2008-10-20 Thread Scott Williamson
On Mon, Oct 20, 2008 at 1:52 AM, Victor Latushkin [EMAIL PROTECTED] wrote:

 Indeed. For example, less than a week ago fix for the following two CRs
 (along with some others) was put back into Solaris Nevada:

 6333409 traversal code should be able to issue multiple reads in parallel
 6418042 want traversal in depth-first pre-order for quicker 'zfs send'


That is helpful, Victor. Does anyone have a full list of CRs that I can
provide to Sun support? I have tried searching the bug database, but I
didn't even find those two on my own.


Re: [zfs-discuss] Improving zfs send performance

2008-10-17 Thread Scott Williamson
Hi All,

I have opened a ticket with Sun support (#66104157) regarding zfs send /
receive and will let you know what I find out.

Keep in mind that this is for Solaris 10, not OpenSolaris.


Re: [zfs-discuss] Improving zfs send performance

2008-10-17 Thread Scott Williamson
On Fri, Oct 17, 2008 at 2:48 PM, Richard Elling [EMAIL PROTECTED] wrote:

 Keep in mind that any changes required for Solaris 10 will first
 be available in OpenSolaris, including any changes which may
 have already been implemented.


For me (a Solaris 10 user) it is the only way I can get information about
which bugs and changes have been identified, and it helps me get fixes from
OpenSolaris into Solaris 10. My last support ticket resulted in a Solaris 10
patch for the Solaris iSCSI target with Windows initiators that made iSCSI
targets on ZFS actually work for us.


[zfs-discuss] Improving zfs send performance

2008-10-16 Thread Scott Williamson
On Wed, Oct 15, 2008 at 9:37 PM, Brent Jones [EMAIL PROTECTED] wrote:


 Scott,

 Can you tell us the configuration that you're using that is working for
 you?
 Were you using RaidZ, or RaidZ2? I'm wondering what the sweetspot is
 to get a good compromise in vdevs and usable space/performance


I used RaidZ with four 5-disk and four 6-disk vdevs in one pool, plus two hot
spares. This is very similar to how the pre-installed OS shipped from Sun.
Also note that I am using ssh as the transfer method.
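
For reference, the per-filesystem transfer amounts to something like this
(pool, dataset, and host names are placeholders):

  zfs send -i tank/home/user@prev tank/home/user@now | \
      ssh backuphost zfs receive -F backup/home/user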

I have not tried mbuffer with this configuration, as it was not needed in
testing with initial home directories of ~14GB in size.

This configuration seems to be similar to Carsten Aulbert's evaluation,
without mbuffer in the pipe.


Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Scott Williamson
Hi Carsten,

You seem to be using dd for write testing. In my testing I noted a large
difference in write speed between writing from /dev/zero with dd and copying
other files. Writing from /dev/zero always seemed to be fast, reaching the
maximum of ~200MB/s, while cp performed worse the fewer vdevs there were.
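
For example, the two kinds of write test I am comparing look roughly like this
(paths and sizes are only illustrative):

  # streaming zeros: always looked fast here (~200MB/s)
  dd if=/dev/zero of=/tank/test/zeros.bin bs=128k count=80000

  # copying real home-directory data: worse the fewer vdevs the pool had
  cp -r /tank/home/someuser /tank/test/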

This also impacted the zfs send speed, as with fewer vdevs in RaidZ2 the
disks seemed to spend most of their time seeking during the send.

On Thu, Oct 16, 2008 at 1:27 AM, Carsten Aulbert [EMAIL PROTECTED]
 wrote:

 Some time ago I made some tests to find this:

 (1) create a new zpool
 (2) Copy user's home to it (always the same ~ 25 GB IIRC)
 (3) zfs send to /dev/null
  (4) evaluate and continue loop

 I did this for fully mirrored setups, raidz as well as raidz2, the
 results were mixed:


 https://n0.aei.uni-hannover.de/cgi-bin/twiki/view/ATLAS/ZFSBenchmarkTest#ZFS_send_performance_relevant_fo

 The culprit here might be that in retrospect this seemed like a good
 home filesystem, i.e. one which was quite fast.

 If you don't want to bother with the table:

 Mirrored setup never exceeded 58 MB/s and was getting faster the more
 small mirrors you used.

 RaidZ had its sweetspot with a configuration of '6 6 6 6 6 6 5 5', i.e.
 6 or 5 disks per RaidZ and 8 vdevs

 RaidZ2 finally was best at '10 9 9 9 9', i.e. 5 vdevs but not much worse
 with only 3, i.e. what we are currently using to get more storage space
 (gains us about 2 TB/box).

 Cheers

 Carsten



Re: [zfs-discuss] Improving zfs send performance

2008-10-15 Thread Scott Williamson
Hi All,

Just want to note that I had the same issue with zfs send and vdevs that had
11 drives in them on an X4500. Reducing the number of drives per vdev cleared
this up.

One vdev is limited to roughly the random IOPS of a single drive in that vdev,
according to this post: http://opensolaris.org/jive/thread.jspa?threadID=74033
(see the comment from ptribble).
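
As a rough worked example, using the ~75-80 random IOPS per disk figure quoted
below: a pool of 8 such raidz vdevs tops out around 8 x 80 = ~640 random IOPS
whether each vdev holds 5 drives or 11, so for IOPS-bound work more, narrower
vdevs are the win.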

On Wed, Oct 15, 2008 at 3:07 PM, Carsten Aulbert [EMAIL PROTECTED]
 wrote:

 Hi Richard,

 Richard Elling wrote:
  Since you are reading, it depends on where the data was written.
  Remember, ZFS dynamic striping != RAID-0.
  I would expect something like this if the pool was expanded at some
  point in time.

 No, the RAID was set-up in one go right after jumpstarting the box.

  (2) The disks should be able to perform much much faster than they
  currently output data at, I believe it's 2008 and not 1995.
 
 
  X4500?  Those disks are good for about 75-80 random iops,
  which seems to be about what they are delivering.  The dtrace
  tool, iopattern, will show the random/sequential nature of the
  workload.
 

 I need to read about his a bit and will try to analyze it.

  (3) The four cores of the X4500 are dying of boredom, i.e. idle 95% all
  the time.
 
  Has anyone a good idea, where the bottleneck could be? I'm running out
  of ideas.
 
 
  I would suspect the disks.  30 second samples are not very useful
  to try and debug such things -- even 1 second samples can be
  too coarse.  But you should take a look at 1 second samples
  to see if there is a consistent I/O workload.
  -- richard
 

 Without doing too much statistics (yet, if needed I can easily do that)
 it looks like these:


                    capacity     operations    bandwidth
 pool             used  avail   read  write   read  write
 --------------  -----  -----  -----  -----  -----  -----
 atlashome       3.54T  17.3T    256      0  7.97M      0
   raidz2         833G  6.00T      0      0      0      0
     c0t0d0          -      -      0      0      0      0
     c1t0d0          -      -      0      0      0      0
     c4t0d0          -      -      0      0      0      0
     c6t0d0          -      -      0      0      0      0
     c7t0d0          -      -      0      0      0      0
     c0t1d0          -      -      0      0      0      0
     c1t1d0          -      -      0      0      0      0
     c4t1d0          -      -      0      0      0      0
     c5t1d0          -      -      0      0      0      0
     c6t1d0          -      -      0      0      0      0
     c7t1d0          -      -      0      0      0      0
     c0t2d0          -      -      0      0      0      0
     c1t2d0          -      -      0      0      0      0
     c4t2d0          -      -      0      0      0      0
     c5t2d0          -      -      0      0      0      0
   raidz2        1.29T  5.52T    133      0  4.14M      0
     c6t2d0          -      -    117      0   285K      0
     c7t2d0          -      -    114      0   279K      0
     c0t3d0          -      -    106      0   261K      0
     c1t3d0          -      -    114      0   282K      0
     c4t3d0          -      -    118      0   294K      0
     c5t3d0          -      -    125      0   308K      0
     c6t3d0          -      -    126      0   311K      0
     c7t3d0          -      -    118      0   293K      0
     c0t4d0          -      -    119      0   295K      0
     c1t4d0          -      -    120      0   298K      0
     c4t4d0          -      -    120      0   291K      0
     c6t4d0          -      -    106      0   257K      0
     c7t4d0          -      -     96      0   236K      0
     c0t5d0          -      -    109      0   267K      0
     c1t5d0          -      -    114      0   282K      0
   raidz2        1.43T  5.82T    123      0  3.83M      0
     c4t5d0          -      -    108      0   242K      0
     c5t5d0          -      -    104      0   236K      0
     c6t5d0          -      -    104      0   239K      0
     c7t5d0          -      -    107      0   245K      0
     c0t6d0          -      -    108      0   248K      0
     c1t6d0          -      -    106      0   245K      0
     c4t6d0          -      -    108      0   250K      0
     c5t6d0          -      -    112      0   258K      0
     c6t6d0          -      -    114      0   261K      0
     c7t6d0          -      -    110      0   253K      0
     c0t7d0          -      -    109      0   248K      0
     c1t7d0          -      -    109      0   246K      0
     c4t7d0          -      -    108      0   243K      0
     c5t7d0          -      -    108      0   244K      0
     c6t7d0          -      -    106      0   240K      0
     c7t7d0          -      -    109      0   244K      0
 --------------  -----  -----  -----  -----  -----  -----

 The IOPS vary between about 70 and 140; the interesting bit is that the
 first raidz2 does not get any hits at all :(

 Cheers

 Carsten



Re: [zfs-discuss] Segmentation fault / core dump with recursive

2008-10-10 Thread Scott Williamson
On Thu, Oct 9, 2008 at 6:56 PM, BJ Quinn [EMAIL PROTECTED] wrote:

 So, here's what I tried - first of all, I set the backup FS to readonly.
  That resulted in the same error message.  Strange, how could something have
 changed since the last snapshot if I CONSCIOUSLY didn't change anything or
 CD into it or anything AND it was set to readonly?


To this: when I 'zfs send' differential snapshots to another pool on Sol10U5,
I see the same message on some filesystems and not others, with no cd into
them or anything.

The only solution I found was to set the parent filesystem to
mountpoint=legacy and not mount it.


Re: [zfs-discuss] zpool imports are slow when importing multiple storage pools

2008-10-06 Thread Scott Williamson
Speaking of this, is there a list anywhere that details what ZFS updates we
can expect to see in S10U6?

On Mon, Oct 6, 2008 at 2:44 PM, Richard Elling [EMAIL PROTECTED] wrote:

 Do you have a lot of snapshots?  If so, CR 6612830 could be contributing.
 Alas, many such fixes are not yet available in S10.
  -- richard

