On Fri, 26 Jun 2009, Richard Elling wrote:
All the tools I have used show no IO problems. I think the problem is
memory but I am unsure on how to troubleshoot it.
Look for latency, not bandwidth. iostat will show latency at the
device level.
Unfortunately, the effect may not be all that obvious.
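For example, something along these lines shows per-device latency (wsvc_t and
asvc_t are the wait-queue and active service times in milliseconds; -z hides
idle devices):

  # iostat -xnz 5

If asvc_t regularly sits in the tens of milliseconds or worse while %b is high,
the disks are the bottleneck even though the aggregate bandwidth looks modest.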
On Fri, Jun 26, 2009 at 6:04 PM, Bob Friesenhahn wrote:
> On Fri, 26 Jun 2009, Scott Meilicke wrote:
>
>> I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI),
>> and got nearly identical results to having the disks on iSCSI:
>
> Both of them are using TCP to access the server.
>
> As others have mentioned, it would be easier to take a stab at this if there
> is some more data to look at.
>
> Have you done any ZFS tuning? If so, please provide the /etc/system, adb, zfs
> etc info.
>
> Can you provide zpool status output?
>
> As far as checking ls performance, just to remove
On Fri, 26 Jun 2009, Scott Meilicke wrote:
I ran the RealLife iometer profile on NFS based storage (vs. SW
iSCSI), and got nearly identical results to having the disks on
iSCSI:
Both of them are using TCP to access the server.
So it appears NFS is doing syncs, while iSCSI is not (See my earl
As others have mentioned, it would be easier to take a stab at this if there is
some more data to look at.
Have you done any ZFS tuning? If so, please provide the /etc/system, adb, zfs
etc info.
Can you provide zpool status output?
As far as checking ls performance, just to remove name service
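To take name service lookups out of the picture, comparing a numeric listing
against a normal long listing is a quick test (the path is just a placeholder):

  $ ptime ls -al /tank/somedir > /dev/null
  $ ptime ls -aln /tank/somedir > /dev/null

If the -n form is dramatically faster, most of the 20 seconds is being spent
resolving UIDs/GIDs (nscd, NIS/LDAP) rather than waiting on ZFS.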
[Adding context]
>> Hi Scott,
>>
>> Why do you assume there is an IO problem?
>> I know my setup is unusual because of the large pool size. However, I have
>> not seen any evidence this is a problem for my workload.
>> prstat does not show any IO wait.
>
>The pool size isn't the issue, it's the
Thanks Ian.
I read the best practices and understand the IO limitation I have created for
this vdev. My system is built to maximize capacity using large stripes, not for
performance.
All the tools I have used show no IO problems.
I think the problem is memory but I am unsure on how to troubleshoot it.
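If memory is the suspicion, a rough starting point on a b117-era box would be
something like the following (exact output varies by build):

  # echo ::memstat | mdb -k            <- where physical memory is going
  # kstat -p zfs:0:arcstats:size       <- current ARC size
  # kstat -p zfs:0:arcstats:c_max      <- ARC ceiling
  # vmstat 5                           <- a non-zero 'sr' column means the
                                          page scanner is running

By default the ARC will grow to most of the 16GB; if other consumers then need
memory, the ARC shrinking and refilling can look like intermittent slowness.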
NightBird wrote:
[please keep enough context so your post makes sense to the mailing list]
Hi Scott,
Why do you assume there is an IO problem?
I know my setup is unusual because of the large pool size. However, I have not
seen any evidence this is a problem for my workload.
prstat does not show any IO wait.
Hi Scott,
Why do you assume there is an IO problem?
I know my setup is unusual because of the large pool size. However, I have not
seen any evidence this is a problem for my workload.
prstat does not show any IO wait.
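prstat's default view will not show I/O wait directly; the microstate view is
closer to what you want (per-thread percentages of time spent in each state):

  $ prstat -mL 5

A high SLP percentage on the threads serving CIFS means they are sleeping,
typically waiting on the pool, even when CPU and bandwidth numbers look low.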
Hi,
When you have a lot of random read/writes, raidz/raidz2 can be fairly slow.
http://blogs.sun.com/roch/entry/when_to_and_not_to
The recommendation is to break the disks into smaller raidz/z2 stripes, thereby
improving IO.
From the ZFS Best Practices Guide:
http://www.solarisinternals.com/wi
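As a sketch only (device names are made up), the same 23 disks could be laid
out as two 11-disk raidz2 vdevs plus a hot spare, so the pool gets two vdevs'
worth of random IOPS instead of one:

  # zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 \
      spare c3t0d0

Narrower vdevs (6-8 disks each) buy even more random IOPS, at the cost of more
capacity going to parity.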
Hello,
We have a server with a couple of raid-z2 pools, each with 23x1TB disks. This
gives us 19TB of usable space on each pool. The server has 2 x quad-core CPUs,
16GB RAM and is running b117. Average load is 4 and we use a lot of CIFS.
We notice ZFS is slow. Even a simple 'ls -al' can take 20s.
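(The 19TB figure is consistent with the layout: a 23-disk raidz2 leaves 21
disks' worth of data, and a marketing "1TB" drive is roughly 0.91 TiB, so
21 x 0.91 ~= 19 TiB usable per pool before reservations.)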
I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI), and got
nearly identical results to having the disks on iSCSI:
iSCSI
IOPS: 1003.8
MB/s: 7.8
Avg Latency (ms): 27.9
NFS
IOPS: 1005.9
MB/s: 7.9
Avg Latency (ms): 29.7
Interesting!
Here is how the pool was behaving during the test:
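(For this kind of observation the usual approach is to leave something like the
following running alongside the iometer test; 'tank' is only a placeholder pool
name:

  # zpool iostat -v tank 5

together with 'iostat -xnz 5' for per-device latency.)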
This appears to be the fix related to the ACLs; they seem to file all of the
ASSERT panics in zfs_fuid.c under it, even if they have nothing to do with
ACLs, my case being one of those.
Thanks for the pointer though!
-Rob
It's actually worse than that--it's not just "recent CPUs" without VT
support. Very few of Intel's current low-price processors, including
the Q8xxx quad-core desktop chips, have VT support.
On Wed, Jun 24, 2009 at 12:09 PM, roland wrote:
>>Dennis is correct in that there are significant areas wh
>Do you mean that it would be faster to read compressed data than uncompressed
>data, or it would be faster to read compressed data than to write it?
yes, because reads need much less CPU time, and the I/O is the same as with writes.
Did you test it in another environment? Likely, increasing the server memory
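A quick way to check is to enable compression on a test dataset and compare;
the dataset name is only an example, and only blocks written after the change
are compressed:

  # zfs set compression=on tank/test
  # zfs get compression,compressratio tank/test

If compressratio ends up noticeably above 1.00x, reads can indeed get faster
because fewer bytes have to come off the disks for the same logical data.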
Hi Tertius,
I think you are saying that you have an OpenSolaris system with a
one-disk root pool and a 6-way RAIDZ non-root pool.
You could create root pool snapshots and send them over to the non-root
pool or to a pool on another system. Then, consider purchasing another
disk for a mirrored root pool.
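A minimal sketch of the snapshot-and-send part (names are examples; /tank is
assumed to be the mountpoint of the RAIDZ pool):

  # zfs snapshot -r rpool@backup
  # zfs send -R rpool@backup > /tank/backups/rpool-backup.zfs

The -R flag wraps the whole root-pool hierarchy, its snapshots and properties
into one stream; it can later be restored with zfs receive, or piped over ssh
to a pool on another system.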
I have one drive that I'm running OpenSolaris on and a 6-drive RAIDZ.
Unfortunately I don't have another drive to mirror the OS drive, so I was
wondering what the best way to back up that drive is. Can I mirror it onto a
file on the RAIDZ, or will this cause problems before the array is loaded w
> > So if you get such a board be sure to avoid Samsung 750GB and
> > 1TB disks. Samsung never acknowledged the bug, nor have they released
> > a firmware update. And nVidia never said anything about it either.
[...]
> I'm a Hitachi disk user myself, and they work swell. The Seagates I have
> in
> The MCP55 is the chipset currently in use in the Sun X2200 M2 series of
> servers.
... which has big problems with certain Samsung SATA disks. :-(
So if you get such a board be sure to avoid Samsung 750GB and
1TB disks. Samsung never acknowledged the bug, nor have they released
a firmware update.