On Mon, 29 Jun 2009, NightBird wrote:
I checked the output of iostat. svc_t is between 5 and 50, depending on when
data is flushed to the disk (CIFS write pattern). %b is between 10 and 50.
%w is always 0.
Example:
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd27        31.5  127.0
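A minimal way to keep an eye on those numbers, assuming the stock Solaris
iostat, is the extended per-device output; the -n form splits the combined
svc_t into queue wait and active device time, which makes it easier to see
where the latency actually sits:

  # extended per-device statistics every 10 seconds, skipping idle devices
  iostat -xnz 10
  # wsvc_t = average ms a request spends waiting in the queue
  # asvc_t = average ms a request spends active in the device
  # %w/%b  = portion of time the queue is non-empty / the device is busy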
On Fri, 26 Jun 2009, Richard Elling wrote:
>> All the tools I have used show no IO problems. I think the problem is
>> memory but I am unsure on how to troubleshoot it.
>
> Look for latency, not bandwidth. iostat will show latency at the
> device level.

Unfortunately, the effect may not be all that obvious [...]
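If the averages iostat reports are hiding outliers, a latency distribution is
more telling. A sketch using a well-known DTrace one-liner pattern, assuming
the io provider is available (it is on a stock b117 kernel):

  # histogram of block-device I/O completion latency, in nanoseconds
  dtrace -n 'io:::start { start[arg0] = timestamp; }
    io:::done /start[arg0]/ {
      @["disk I/O latency (ns)"] = quantize(timestamp - start[arg0]);
      start[arg0] = 0;
    }'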
As others have mentioned, it would be easier to take a stab at this if there is
some more data to look at.
Have you done any ZFS tuning? If so, please provide the /etc/system, adb, zfs
etc info.
Can you provide zpool status output?
As far as checking ls performance, just to remove name service [...]
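For reference, the requested information can be collected with a few stock
commands; a sketch, where the pool name 'tank' and the share path are
placeholders for this server's actual names:

  # any ZFS tunables set at boot time
  grep -i zfs /etc/system

  # pool layout and health
  zpool status tank

  # dataset properties that commonly affect performance
  zfs get recordsize,compression,atime tank

  # list with numeric UIDs/GIDs, taking name-service lookups out of the picture
  ptime ls -aln /tank/share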
[Adding context]
>> Hi Scott,
>>
>> Why do you assume there is an IO problem?
>> I know my setup is unusual because of the large pool size. However, I have
>> not seen any evidence this is a problem for my workload.
>> prstat does not show any IO wait.
>
> The pool size isn't the issue, it's the [...]
Thanks Ian.
I read the best practices and understand the IO limitation I have created for
this vdev. My system is built to maximize capacity using large stripes, not
performance.
All the tools I have used show no IO problems.
I think the problem is memory but I am unsure on how to troubleshoot it.
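On the memory side, two stock tools give a quick picture of where RAM is
going; a sketch, assuming mdb and the arcstats kstat as shipped in b117:

  # kernel / ZFS / anon / free memory breakdown
  echo "::memstat" | mdb -k

  # current ARC size, target size, and hit/miss counters
  kstat -n arcstats | egrep 'size|hits|misses'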
NightBird wrote:
[please keep enough context so your post makes sense to the mail list]
Hi Scott,
Why do you assume there is an IO problem?
I know my setup is unusual because of the large pool size. However, I have not
seen any evidence this is a problem for my workload.
prstat does not show any IO wait.
Hi Scott,
Why do you assume there is an IO problem?
I know my setup is unusual because of the large pool size. However, I have not
seen any evidence this is a problem for my workload.
prstat does not show any IO wait.
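For what it's worth, the default prstat output only shows an aggregate state
per process; microstate accounting is where time spent sleeping on I/O would
show up. A sketch, assuming stock prstat:

  # per-thread microstates every 10 seconds:
  # SLP = % of time sleeping (includes waiting on I/O), LAT = waiting for CPU,
  # LCK = waiting on locks
  prstat -mL 10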
Hi,
When you have a lot of random read/writes, raidz/raidz2 can be fairly slow.
http://blogs.sun.com/roch/entry/when_to_and_not_to
The recommendation is to break the disks into smaller raidz/z2 stripes, thereby
improving IO.
From the ZFS Best Practices Guide:
http://www.solarisinternals.com/wi
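The reasoning behind the recommendation (this is the gist of Roch's entry
linked above): a raidz/raidz2 group delivers roughly the small random-read
IOPS of a single member disk, so one 23-wide vdev behaves like one spindle for
random I/O, while several narrower vdevs are striped together and their IOPS
add up. A sketch of such a layout with hypothetical device names (c1t0d0 etc.
are placeholders, not this server's disks):

  # three 7-disk raidz2 vdevs plus two spares instead of one 23-disk raidz2:
  # roughly 3x the random IOPS of the wide layout, at some cost in capacity
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 \
      spare  c4t0d0 c4t1d0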
On Fri, Jun 26 at 15:18, NightBird wrote:
Hello,
We have a server with a couple of raid-z2 pools, each with 23x1TB disks. This
gives us 19TB of usable space on each pool. The server has 2 x quad-core CPUs,
16GB RAM and is running b117. Average load is 4 and we use a lot of CIFS.
We notice ZFS is slow. Even a simple 'ls -al' can take 20s [...]
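As a quick data point, timing the same listing twice separates cold-cache
metadata reads from everything else; a sketch with a placeholder path:

  # first run pulls metadata from disk; the repeat should be served from the ARC
  ptime ls -al /tank/share/dir > /dev/null
  ptime ls -al /tank/share/dir > /dev/null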