All righty...I set c_max to 512MB, c to 512MB, and p to 256MB...
> arc::print -tad
{
...
c02e29e8 uint64_t size = 0t299008
c02e29f0 uint64_t p = 0t16588228608
c02e29f8 uint64_t c = 0t33176457216
c02e2a00 uint64_t c_min = 0t1070318720
c02e2a08
Will try that now...
/jim
[EMAIL PROTECTED] wrote:
I suppose I should have been more forward about making my last point.
If the arc_c_max isn't set in /etc/system, I don't believe that the ARC
will initialize arc.p to the correct value. I could be wrong about
this; however, next time you set
Following a reboot:
> arc::print -tad
{
. . .
c02e29e8 uint64_t size = 0t299008
c02e29f0 uint64_t p = 0t16588228608
c02e29f8 uint64_t c = 0t33176457216
c02e2a00 uint64_t c_min = 0t1070318720
c02e2a08 uint64_t c_max = 0t33176457216
. . .
}
>
I suppose I should have been more forward about making my last point.
If the arc_c_max isn't set in /etc/system, I don't believe that the ARC
will initialize arc.p to the correct value. I could be wrong about
this; however, next time you set c_max, set c to the same value as c_max
and set p to half of c.
How/when did you configure arc_c_max?
Immediately following a reboot, I set arc.c_max using mdb,
then verified reading the arc structure again.
arc.p is supposed to be
initialized to half of arc.c. Also, I assume that there's a reliable
test case for reproducing this problem?
Yep. I'm
Something else to consider, depending upon how you set arc_c_max, you
may just want to set arc_c and arc_p at the same time. If you try
setting arc_c_max, and then setting arc_c to arc_c_max, and then set
arc_p to arc_c / 2, do you still get this problem?
-j
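The rule being suggested above can be sketched as follows. This is a hedged illustration in plain Python, with invented names that mirror the kstat fields, not any real ZFS interface:

```python
# Sketch of the tuning advice above: when lowering arc_c_max by hand,
# also drop the current target c to the new cap and re-derive p as half
# of c. Function and variable names are illustrative only.

def retune_arc(c_max_bytes):
    """Return (c_max, c, p) per the advice: c = c_max, p = c / 2."""
    c = c_max_bytes      # current target follows the new cap
    p = c // 2           # p is normally initialized to half of c
    return c_max_bytes, c, p

# Example: the 512MB cap from the first message of the thread.
print(retune_arc(512 * 1024 * 1024))  # (536870912, 536870912, 268435456)
```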
On Thu, Mar 15, 2007 at 05:18:12PM -0
Gar. This isn't what I was hoping to see. Buffers that aren't
available for eviction aren't listed in the lsize count. It looks like
the MRU has grown to 10GB and most of this could be successfully
evicted.
The calculation for determining if we evict from the MRU is in
arc_adjust() and looks so
> ARC_mru::print -d size lsize
size = 0t10224433152
lsize = 0t10218960896
> ARC_mfu::print -d size lsize
size = 0t303450112
lsize = 0t289998848
> ARC_anon::print -d size
size = 0
>
So it looks like the MRU is running at 10GB...
What does this tell us?
Thanks,
/jim
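For what it's worth, if arc_adjust() gates MRU eviction on p roughly the way the OpenSolaris code of that era did, the numbers above would explain the stall. A simplified Python sketch follows; this is an approximation for illustration, not the actual C source, and the field names are illustrative:

```python
# Simplified model of the MRU-eviction check in arc_adjust(): evict from
# the MRU only when anon + mru exceed the target p, and never more than
# the MRU's evictable (lsize) bytes. An approximation, not the real code.

def mru_eviction_target(anon_size, mru_size, mru_lsize, p):
    top = anon_size + mru_size
    if top > p and mru_lsize > 0:
        return min(mru_lsize, top - p)
    return 0

# Values from the mdb output in this thread: because p was never
# re-derived after c was lowered, p (~16GB) exceeds the whole MRU,
# so no MRU eviction happens even though size is ~10x c.
print(mru_eviction_target(anon_size=0,
                          mru_size=10224433152,
                          mru_lsize=10218960896,
                          p=16381819904))  # 0
```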
[EMAIL PROTECTED] wrote:
I don't think Solaris dom0 does Pacifica (AMD-V) yet.
That would rule out Windows for now.
You can run CentOS zones on SXCR.
That just leaves FreeBSD (which hasn't got fantastic Xen support either,
despite Kip Macy's excellent work).
Unless you've got an app that needs that, zones sound like a much safer option.
This seems a bit strange. What's the workload, and also, what's the
output for:
> ARC_mru::print size lsize
> ARC_mfu::print size lsize
and
> ARC_anon::print size
For obvious reasons, the ARC can't evict buffers that are in use.
Buffers that are available to be evicted should be on the mru or mfu list.
Hi Jim,
My understanding is that the DNLC can consume quite a bit of memory
too, and the ARC limitations (and memory culler) don't clean the DNLC
yet. So if you're working with a lot of smaller files, you can still
go way over your ARC limit. Anyone, please correct me if I've got that
wrong.
-J
FYI - After a few more runs, ARC size hit 10GB, which is now 10X c_max:
> arc::print -tad
{
. . .
c02e29e8 uint64_t size = 0t10527883264
c02e29f0 uint64_t p = 0t16381819904
c02e29f8 uint64_t c = 0t1070318720
c02e2a00 uint64_t c_min = 0t1070318720
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06
(update 3). All file IO is mmap(file), read memory segment, unmap, close.
Tweaked the ARC size down via mdb to 1GB. I used that value because
c_min was also 1GB, and I was not sure whether c_max could be set
smaller than c_min. Anyway
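As a hedged sketch of that IO pattern (plain Python with a throwaway file; the original workload was not posted, so the file size and chunking here are illustrative):

```python
# Minimal sketch of the reported workload: mmap(file), read through the
# mapping, unmap, close. Chunk size and file are illustrative.
import mmap
import os
import tempfile

def scan_file(path, chunk=1 << 20):
    """Map the file read-only, touch every byte, then unmap and close."""
    total = 0
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        if size == 0:
            return 0
        with mmap.mmap(f.fileno(), size, access=mmap.ACCESS_READ) as m:
            for off in range(0, size, chunk):
                total += len(m[off:off + chunk])
    return total

# Throwaway 4MB example file:
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"x" * (4 << 20))
print(scan_file(tf.name))  # 4194304
os.remove(tf.name)
```

Each pass maps, streams, and unmaps, so every run pushes fresh pages through the cache, which is what makes this workload a good stress test for ARC sizing.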
> What makes you think that these arrays work with
> mpxio? Every array does
> not automatically work.
They are working rock solid with mpxio and UFS!
gino
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
In the meantime, the Sun support engineer did figure out that zdb does not work
because zdb uses the information from /etc/zfs/zpool.cache. However,
I did use "zpool -R" to import the pool, which did not update
/etc/zfs/zpool.cache. Is there another method to map a dataset
number to a filesystem?
Han