On Thu, Oct 17, 2013 at 01:11:32AM +0100, Saso Kiselkov wrote:
> On 10/17/13 1:03 AM, Matthew Ahrens wrote:
> > On Wed, Oct 16, 2013 at 5:00 PM, Steven Hartland wrote:
> >     How about the case where the admin has specifically sized a smaller
> >     zfs_arc_max to keep ZFS / ARC memory requirements down because they
> >     want the memory for other uses, and there is no L2ARC?
> > 
> >     In this case, sizing the hash based off the machine's physmem could
> >     counteract this and hence cause a problem, could it not?
> > 
> >     I know it's extreme, but take for example a machine with 256GB of
> >     RAM but zfs_arc_max set to 1GB: you'd be allocating 256MB of that
> >     as the hash size, which is surely a massive waste, as you wouldn't
> >     need 256MB of hash for just 1GB of ARC buffers?
> > 
> >     Am I still barking up the wrong tree?
> > 
> > 
> > The admin can dynamically change arc_c_max after boot, which could
> > leave the hash table much too small if it was sized based on what
> > zfs_arc_max was at boot time.
> > 
> > I'd say keep it simple until we see a problem.
> 
> +1.

I agree. Also, if the admin changes the default of "arc_c_max", then
they can also change the size of the hash table (right?) if they feel
it's necessary.

-- 
Cheers, Prakash

> 
> -- 
> Saso