I suppose I should have been more direct about my last point: if
arc_c_max isn't set in /etc/system, I don't believe the ARC will
initialize arc.p to the correct value. I could be wrong about this;
however, the next time you set c_max, also set c to the same value as
c_max and set p to half of c. Let me know whether this addresses the
problem.
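
For concreteness, here's a sketch of the values that advice implies,
assuming a 1GB cap (the 1GB figure is illustrative; on a live system
you'd write each value with mdb -kw at the member address reported by
`arc::print -a`):

```shell
#!/bin/sh
# Sketch of the tuning advice above: pick c_max, then set c = c_max and
# p = c / 2. The 1GB cap is an assumption for illustration, not a
# recommendation; apply the values via mdb -kw at the addresses shown
# by `arc::print -a c_max c p`.
C_MAX=$((1024 * 1024 * 1024))   # desired ARC cap: 1GB
C=$C_MAX                        # c mirrors c_max
P=$((C / 2))                    # p starts out at half of c

printf 'c_max = 0x%x\n' "$C_MAX"
printf 'c     = 0x%x\n' "$C"
printf 'p     = 0x%x\n' "$P"
```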

-j

> >How/when did you configure arc_c_max?  
> Immediately following a reboot, I set arc.c_max using mdb,
> then verified reading the arc structure again.
> >arc.p is supposed to be
> >initialized to half of arc.c.  Also, I assume that there's a reliable
> >test case for reproducing this problem?
> >  
> Yep. I'm using an x4500 in-house to sort out the performance of a customer test
> case that uses mmap. We acquired new DIMMs to bring the
> x4500 to 32GB, since the workload has a 64GB working-set size
> and we were clobbering a 16GB thumper. We wanted to see how doubling
> memory might help.
> 
> I'm trying to clamp the ARC size because for mmap-intensive workloads,
> it seems to hurt more than help (although, based on experiments up to this
> point, it's not hurting a lot).
> 
> I'll do another reboot, and run it all down for you serially...
> 
> /jim
> 
> >Thanks,
> >
> >-j
> >
> >On Thu, Mar 15, 2007 at 06:57:12PM -0400, Jim Mauro wrote:
> >  
> >>    
> >>>ARC_mru::print -d size lsize
> >>>      
> >>size = 0t10224433152
> >>lsize = 0t10218960896
> >>    
> >>>ARC_mfu::print -d size lsize
> >>>      
> >>size = 0t303450112
> >>lsize = 0t289998848
> >>    
> >>>ARC_anon::print -d size
> >>>      
> >>size = 0
> >>    
> >>So it looks like the MRU is running at 10GB...
> >>
> >>What does this tell us?
> >>
> >>Thanks,
> >>/jim
> >>
> >>
> >>
> >>[EMAIL PROTECTED] wrote:
> >>>This seems a bit strange.  What's the workload, and also, what's the
> >>>output for:
> >>>
> >>>>ARC_mru::print size lsize
> >>>>ARC_mfu::print size lsize
> >>>and
> >>>>ARC_anon::print size
> >>>For obvious reasons, the ARC can't evict buffers that are in use.
> >>>Buffers that are available to be evicted should be on the mru or mfu
> >>>list, so this output should be instructive.
> >>>
> >>>-j
> >>>
> >>>On Thu, Mar 15, 2007 at 02:08:37PM -0400, Jim Mauro wrote:
> >>>>FYI - After a few more runs, ARC size hit 10GB, which is now 10X c_max:
> >>>>
> >>>>
> >>>>>arc::print -tad
> >>>>{
> >>>>. . .
> >>>>  ffffffffc02e29e8 uint64_t size = 0t10527883264
> >>>>  ffffffffc02e29f0 uint64_t p = 0t16381819904
> >>>>  ffffffffc02e29f8 uint64_t c = 0t1070318720
> >>>>  ffffffffc02e2a00 uint64_t c_min = 0t1070318720
> >>>>  ffffffffc02e2a08 uint64_t c_max = 0t1070318720
> >>>>. . .
> >>>>
> >>>>Perhaps c_max does not do what I think it does?
> >>>>
> >>>>Thanks,
> >>>>/jim
> >>>>
> >>>>
> >>>>Jim Mauro wrote:
> >>>>>Running an mmap-intensive workload on ZFS on a X4500, Solaris 10 11/06
> >>>>>(update 3). All file IO is mmap(file), read memory segment, unmap, 
> >>>>>close.
> >>>>>
> >>>>>Tweaked the ARC size down via mdb to 1GB. I used that value because
> >>>>>c_min was also 1GB, and I was not sure if c_max could be smaller than
> >>>>>c_min....Anyway, I set c_max to 1GB.
> >>>>>
> >>>>>After a workload run:
> >>>>>>arc::print -tad
> >>>>>{
> >>>>>. . .
> >>>>>ffffffffc02e29e8 uint64_t size = 0t3099832832
> >>>>>ffffffffc02e29f0 uint64_t p = 0t16540761088
> >>>>>ffffffffc02e29f8 uint64_t c = 0t1070318720
> >>>>>ffffffffc02e2a00 uint64_t c_min = 0t1070318720
> >>>>>ffffffffc02e2a08 uint64_t c_max = 0t1070318720
> >>>>>. . .
> >>>>>
> >>>>>"size" is at 3GB, with c_max at 1GB.
> >>>>>
> >>>>>What gives? I'm looking at the code now, but was under the impression
> >>>>>c_max would limit ARC growth. Granted, it's not a factor of 10, and
> >>>>>it's certainly much better than the out-of-the-box growth to 24GB
> >>>>>(this is a 32GB x4500), so clearly ARC growth is being limited, but it
> >>>>>still grew to 3X c_max.
> >>>>>
> >>>>>Thanks,
> >>>>>/jim
> >>>>>_______________________________________________
> >>>>>zfs-discuss mailing list
> >>>>>zfs-discuss@opensolaris.org
> >>>>>http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
