On Mar 24, 2012, at 10:29 PM, Aubrey Li wrote:
Hi,
I'm migrating a webserver (apache+php) from RHEL to Solaris. During the
stress-testing comparison, I found that under the same number of client
sessions, CPU utilization is ~70% on RHEL while the CPU is fully saturated on Solaris.
After some investigation, zfs
In general, mixing SATA and SAS directly behind expanders (e.g. without
SAS/SATA interposers) seems to be bad juju that an OS can't fix.
In general I'd agree. Just mixing them on the same box can be problematic,
I've noticed - though I think it's as much as anything the firmware
on the 3G/s
This is the wrong forum for general-purpose performance tuning, so I won't
continue this much further. Notice the huge number of icsw; that is a bigger
symptom than locks.
-- richard
On Mar 25, 2012, at 6:24 AM, Aubrey Li wrote:
SET minf mjf xcal intr ithr csw icsw migr smtx srw syscl
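The columns above are the header of mpstat's aggregated (per processor set) output; icsw counts involuntary context switches. A minimal way to watch it while the load runs, as a sketch rather than a command from the original mails:

# mpstat -a 5

A steadily high icsw relative to csw means runnable threads are being forced off CPU before finishing their quantum, which matches the saturation symptom Richard points at.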
On Mar 25, 2012, at 6:26 AM, Jeff Bacon wrote:
On Mon, Mar 26, 2012 at 12:48 AM, Richard Elling
richard.ell...@richardelling.com wrote:
This is the wrong forum for general-purpose performance tuning, so I won't
continue this much further.
thanks anyway,
On 3/25/12 10:25 AM, Aubrey Li wrote:
On Mar 25, 2012, at 10:25 AM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 2:10 AM, zfs user zf...@itsbeen.sent.com wrote:
On Mon, Mar 26, 2012 at 2:58 AM, Richard Elling
richard.ell...@richardelling.com wrote:
On Mon, Mar 26, 2012 at 2:13 AM, Aubrey Li aubrey...@gmail.com wrote:
The problem is, every zfs vnode access needs the **same zfs root** lock.
When the number of httpd processes and the corresponding kernel threads
becomes large, contention on this root lock becomes horrible. This situation does
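That kind of single hot lock can be confirmed directly with lockstat's contention profiling. A minimal sketch, assuming the httpd load is running while it samples (not a command from the original thread):

# lockstat -C -D 10 sleep 10
# lockstat -C -s 8 -D 10 sleep 10

The first form lists the ten most contended kernel locks over a 10-second window; the second adds 8-frame caller stacks so a contended lock can be tied back to the zfs code path that takes it.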
If you're chasing CPU utilization, specifically %sys (time in the kernel),
I would start with a time-based kernel profile.
#dtrace -n 'profile-997hz /arg0/ { @[stack()] = count(); } tick-60sec {
trunc(@, 20); printa(@); exit(0); }'
I would be curious to see where the CPU cycles are being consumed first,
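A variant in the same spirit that keys the aggregation on the interrupted kernel function rather than the full stack can be easier to scan on a first pass; a sketch, not part of the original suggestion:

#dtrace -n 'profile-997hz /arg0/ { @[func(arg0)] = count(); } tick-60sec { trunc(@, 20); printa(@); exit(0); }'

Once the hot functions are known, the stack()-based profile above shows how they are being reached.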
On Mon, Mar 26, 2012 at 3:22 AM, Fajar A. Nugraha w...@fajar.net wrote:
I had never seen any issues until I did a comparison with Linux.
So basically you're comparing linux + ext3/4 performance with solaris
+ zfs, on the same hardware? That's not really fair, is it?
If your load is
On Mon, Mar 26, 2012 at 4:18 AM, Jim Mauro james.ma...@oracle.com wrote:
On Mar 25, 2012, at 6:51 PM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 4:18 AM, Jim Mauro james.ma...@oracle.com wrote:
If you're chasing CPU utilization, specifically %sys (time in the kernel),
I would start with a time-based kernel profile.
#dtrace -n 'profile-997hz /arg0/ { @[stack()] =
Hello.
What are the best practices for choosing the ZFS volume volblocksize setting
for a VMware VMFS-5 datastore?
The VMFS-5 block size is 1 MB. Not sure how it corresponds to ZFS.
Setup details follow:
- 11 pairs of mirrors;
- 600GB 15k SAS disks;
- SSDs for L2ARC and ZIL;
- COMSTAR FC target;
- about 30 virtual
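For reference, creating the backing zvol with an explicit volblocksize and publishing it through COMSTAR looks roughly like this. A sketch only: the pool name, volume name, size, and the 64K volblocksize are illustrative placeholders, not recommendations from the thread:

# zfs create -V 2T -o volblocksize=64K tank/vmfs01
# sbdadm create-lu /dev/zvol/rdsk/tank/vmfs01
# stmfadm add-view <GUID printed by sbdadm create-lu>

Note that volblocksize is fixed at creation time, so it has to be decided before the VMFS-5 datastore is laid down on the LUN.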
On Mon, Mar 26, 2012 at 11:34 AM, Richard Elling
richard.ell...@gmail.com wrote:
Apologies to the ZFSers, this thread really belongs elsewhere.
On Mar 25, 2012, at 10:11 PM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 12:19 PM, Richard Elling
richard.ell...@richardelling.com wrote:
Apologies to the ZFSers, this thread really belongs elsewhere.
Some of the info in it is informative for other zfs users as well though :)
Here is the output; I changed to tick-5sec and trunc(@, 5).
No.2
On Mon, Mar 26, 2012 at 1:19 PM, Richard Elling
richard.ell...@richardelling.com wrote:
Apologies to the ZFSers, this thread really belongs elsewhere.
Let me explain below:
Apache's document root is on zfs; you can see
it at No.3 in the dtrace report above.
The sort is in reverse
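As a side note not from the thread: DTrace prints aggregations in ascending order by default, which is why the hottest stack appears last. The aggsortrev option reverses that so the busiest entry prints first, e.g.:

#dtrace -x aggsortrev -n 'profile-997hz /arg0/ { @[stack()] = count(); } tick-5sec { trunc(@, 5); printa(@); exit(0); }'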