Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Bob Friesenhahn
On Fri, 26 Jun 2009, Richard Elling wrote: All the tools I have used show no IO problems. I think the problem is memory but I am unsure how to troubleshoot it. Look for latency, not bandwidth. iostat will show latency at the device level. Unfortunately, the effect may not be all that obvious...
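
For reference, a minimal sketch of watching device-level latency (standard Solaris iostat flags; the 5-second interval is my choice, not from the thread):

    # Extended per-device statistics every 5 seconds; watch wsvc_t and
    # asvc_t (wait and active service times, in ms) rather than kr/s, kw/s.
    iostat -xn 5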

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-26 Thread Brent Jones
On Fri, Jun 26, 2009 at 6:04 PM, Bob Friesenhahn wrote: > On Fri, 26 Jun 2009, Scott Meilicke wrote: >> I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI), and got nearly identical results to having the disks on iSCSI: > Both of them are using TCP to access the server.

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread NightBird
> As others have mentioned, it would be easier to take a stab at this if there is some more data to look at. > Have you done any ZFS tuning? If so, please provide the /etc/system, adb, zfs etc. info. > Can you provide zpool status output? > As far as checking ls performance, just to remove...

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Bob Friesenhahn
On Fri, 26 Jun 2009, NightBird wrote: Thanks Ian. I read the best practices and understand the IO limitation I have created for this vdev. My system is built to maximize capacity using large stripes, not performance. All the tools I have used show no IO problems. I think the problem is memory...

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Richard Elling
NightBird wrote: Thanks Ian. I read the best practices and understand the IO limitation I have created for this vdev. My system is built to maximize capacity using large stripes, not performance. All the tools I have used show no IO problems. I think the problem is memory but I am unsure...

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-26 Thread Bob Friesenhahn
On Fri, 26 Jun 2009, Scott Meilicke wrote: I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI), and got nearly identical results to having the disks on iSCSI: Both of them are using TCP to access the server. So it appears NFS is doing syncs, while iSCSI is not (See my earlier...
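
One way to see the difference in sync behavior (my own sketch, not from the thread; it assumes DTrace access to zil_commit, the ZIL commit entry point in the zfs kernel module):

    # Count ZIL commits per second during a test run; frequent zil_commit
    # calls indicate synchronous write semantics, as with NFS COMMIT.
    dtrace -n 'fbt:zfs:zil_commit:entry { @n = count(); } tick-1s { printa(@n); clear(@n); }'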

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread William D. Hathaway
As others have mentioned, it would be easier to take a stab at this if there is some more data to look at. Have you done any ZFS tuning? If so, please provide the /etc/system, adb, zfs etc. info. Can you provide zpool status output? As far as checking ls performance, just to remove name service...
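
A quick test along those lines (a sketch; the directory path is hypothetical, and -n makes ls print numeric UIDs/GIDs so no name-service lookups occur):

    # Compare ls with and without user/group name resolution; a large
    # gap points at the name service rather than ZFS itself.
    ptime ls -al /tank/somedir > /dev/null
    ptime ls -aln /tank/somedir > /dev/null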

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread NightBird
>NightBird wrote: >> Hello, We have a server with a couple of raid-z2 pools, each with 23x1TB disks. This gives us 19TB of usable space on each pool. The server has 2 x quad core CPU, 16GB RAM and is running b117. Average load is 4 and we use a lot of CIFS. >> We notice ZFS is...

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Ian Collins
NightBird wrote: Hello, We have a server with a couple of raid-z2 pools, each with 23x1TB disks. This gives us 19TB of usable space on each pool. The server has 2 x quad core CPU, 16GB RAM and is running b117. Average load is 4 and we use a lot of CIFS. We notice ZFS is slow. Even a simple 'ls -al'...

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread NightBird
[Adding context] >> Hi Scott, why do you assume there is an IO problem? >> I know my setup is unusual because of the large pool size. However, I have not seen any evidence this is a problem for my workload. >> prstat does not show any IO wait. > The pool size isn't the issue, it's the...

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread NightBird
Thanks Ian. I read the best practices and understand the IO limitation I have created for this vdev. My system is built to maximize capacity using large stripes, not performance. All the tools I have used show no IO problems. I think the problem is memory but I am unsure how to troubleshoot it...
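
For what it's worth, a sketch of two standard OpenSolaris commands for seeing where memory has gone (the interpretation is mine, not the poster's):

    # Kernel memory breakdown; ZFS file data (the ARC) gets its own line.
    echo ::memstat | mdb -k

    # Current ARC size and target size, in bytes.
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c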

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Ian Collins
NightBird wrote: [please keep enough context so your post makes sense to the mailing list] Hi Scott, Why do you assume there is an IO problem? I know my setup is unusual because of the large pool size. However, I have not seen any evidence this is a problem for my workload. prstat does not show any...

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread NightBird
Hi Scott, Why do you assume there is an IO problem? I know my setup is unusual because of the large pool size. However, I have not seen any evidence this is a problem for my workload. prstat does not show any IO wait.
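
As an aside, a sketch of how wait states can be examined with microstate accounting (standard prstat flags; reading the columns is a judgment call):

    # Per-thread microstates every 5 seconds: SLP is time sleeping
    # (including waiting on I/O), LAT is time waiting for a CPU.
    prstat -mL 5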

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Scott Meilicke
Hi, When you have a lot of random reads/writes, raidz/raidz2 can be fairly slow. http://blogs.sun.com/roch/entry/when_to_and_not_to The recommendation is to break the disks into smaller raidz/z2 stripes, thereby improving IO. From the ZFS Best Practices Guide: http://www.solarisinternals.com/wi...
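
To illustrate that advice (a sketch with made-up device names, not the poster's actual layout): splitting the same disks into several narrow raidz2 vdevs multiplies random IOPS, since ZFS stripes across vdevs and each raidz vdev delivers roughly one disk's worth of random IOPS.

    # Capacity-optimized: one wide raidz2 vdev.
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

    # IOPS-optimized: the same eight disks as two raidz2 vdevs, roughly
    # doubling random IOPS at the cost of two more parity disks.
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        raidz2 c1t4d0 c1t5d0 c1t6d0 c1t7d0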

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Eric D. Mudama
On Fri, Jun 26 at 15:18, NightBird wrote: Hello, We have a server with a couple of raid-z2 pools, each with 23x1TB disks. This gives us 19TB of usable space on each pool. The server has 2 x quad core CPU, 16GB RAM and is running b117. Average load is 4 and we use a lot of CIFS. We notice ZFS is...

[zfs-discuss] slow ls or slow zfs

2009-06-26 Thread NightBird
Hello, We have a server with a couple of raid-z2 pools, each with 23x1TB disks. This gives us 19TB of usable space on each pool. The server has 2 x quad core CPU, 16GB RAM and is running b117. Average load is 4 and we use a lot of CIFS. We notice ZFS is slow. Even a simple 'ls -al' can take 20s...

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-26 Thread Scott Meilicke
I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI), and got nearly identical results to having the disks on iSCSI:

    Protocol   IOPS     MB/s   Avg Latency (ms)
    iSCSI      1003.8   7.8    27.9
    NFS        1005.9   7.9    29.7

Interesting! Here is how the pool was behaving during the test...

Re: [zfs-discuss] BugID formally known as 6746456

2009-06-26 Thread Rob Healey
This appears to be the fix related to the ACLs, which they seem to file all of the ASSERT panics in zfs_fuid.c under even if they have nothing to do with ACLs; my case being one of those. Thanks for the pointer though! -Rob

Re: [zfs-discuss] zfs on 32 bit?

2009-06-26 Thread Scott Laird
It's actually worse than that--it's not just "recent CPUs" without VT support. Very few of Intel's current low-price processors, including the Q8xxx quad-core desktop chips, have VT support. On Wed, Jun 24, 2009 at 12:09 PM, roland wrote: > Dennis is correct in that there are significant areas where...

Re: [zfs-discuss] Is the PROPERTY compression will increase the ZFS I/O throughput?

2009-06-26 Thread Chookiex
>Do you mean that it would be faster to read compressed data than uncompressed data, or that it would be faster to read compressed data than to write it? Yes, because reading needs much less CPU time (decompressing is cheaper than compressing), and the I/O is the same as for writes. Did you test it in another environment? Likely, increasing the server m...
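
For anyone who wants to try this, a sketch (the dataset name is hypothetical; lzjb is the lightweight default algorithm behind compression=on in this era):

    # Enable compression on a dataset, then check how well it compresses.
    zfs set compression=lzjb tank/data
    zfs get compression,compressratio tank/data

Note that only data written after the property is set gets compressed; existing blocks stay as they are.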

Re: [zfs-discuss] Backing up OS drive?

2009-06-26 Thread Cindy . Swearingen
Hi Tertius, I think you are saying that you have an OpenSolaris system with a one-disk root pool and a 6-way RAIDZ non-root pool. You could create root pool snapshots and send them over to the non-root pool or to a pool on another system. Then, consider purchasing another disk for a mirrored root pool...
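
A sketch of both steps (pool, snapshot, and device names are illustrative; zfs send -R and zpool attach are standard commands):

    # Recursive snapshot of the root pool, streamed into the RAIDZ pool.
    zfs snapshot -r rpool@backup
    zfs send -R rpool@backup | zfs receive -d tank/rpool-backup

    # Later, with a second disk in hand, turn the root pool into a mirror.
    zpool attach rpool c0t0d0s0 c0t1d0s0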

[zfs-discuss] Backing up OS drive?

2009-06-26 Thread Tertius Lydgate
I have one drive that I'm running OpenSolaris on and a 6-drive RAIDZ. Unfortunately I don't have another drive to mirror the OS drive, so I was wondering what the best way to back up that drive is. Can I mirror it onto a file on the RAIDZ, or will this cause problems before the array is loaded w...

Re: [zfs-discuss] SPARC SATA, please.

2009-06-26 Thread Volker A. Brandt
> > So if you get such a board be sure to avoid Samsung 750GB and 1TB disks. Samsung never acknowledged the bug, nor have they released a firmware update. And nVidia never said anything about it either. [...] > I'm a Hitachi disk user myself, and they work swell. The Seagates I have in...

Re: [zfs-discuss] SPARC SATA, please.

2009-06-26 Thread Erik Trimble
Volker A. Brandt wrote: The MCP55 is the chipset currently in use in the Sun X2200 M2 series of servers. ... which has big problems with certain Samsung SATA disks. :-( So if you get such a board be sure to avoid Samsung 750GB and 1TB disks. Samsung never acknowledged the bug, nor have they...

Re: [zfs-discuss] SPARC SATA, please.

2009-06-26 Thread Volker A. Brandt
> The MCP55 is the chipset currently in use in the Sun X2200 M2 series of servers. ... which has big problems with certain Samsung SATA disks. :-( So if you get such a board be sure to avoid Samsung 750GB and 1TB disks. Samsung never acknowledged the bug, nor have they released a firmware update...