On Tue, Aug 28, 2012 at 12:07 PM, Alan Cox <a...@rice.edu> wrote:
> On 08/27/2012 17:23, Gezeala M. Bacuño II wrote:
>>
>> On Thu, Aug 23, 2012 at 12:02 PM, Alan Cox <a...@rice.edu> wrote:
>>>
>>> On 08/22/2012 12:09, Gezeala M. Bacuño II wrote:
>>>>
>>>> On Tue, Aug 21, 2012 at 4:24 PM, Alan Cox <a...@rice.edu> wrote:
>>>>>
>>>>> On 8/20/2012 8:26 PM, Gezeala M. Bacuño II wrote:
>>>>>>
>>>>>> On Mon, Aug 20, 2012 at 9:07 AM, Gezeala M. Bacuño II
>>>>>> <geze...@gmail.com> wrote:
>>>>>>>
>>>>>>> On Mon, Aug 20, 2012 at 8:22 AM, Alan Cox <a...@rice.edu> wrote:
>>>>>>>>
>>>>>>>> On 08/18/2012 19:57, Gezeala M. Bacuño II wrote:
>>>>>>>>>
>>>>>>>>> On Sat, Aug 18, 2012 at 12:14 PM, Alan Cox <a...@rice.edu> wrote:
>>>>>>>>>>
>>>>>>>>>> On 08/17/2012 17:08, Gezeala M. Bacuño II wrote:
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Aug 17, 2012 at 1:58 PM, Alan Cox <a...@rice.edu> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> vm.kmem_size controls the maximum size of the kernel's heap,
>>>>>>>>>>>> i.e., the region where the kernel's slab and malloc()-like
>>>>>>>>>>>> memory allocators obtain their memory.  While this heap may
>>>>>>>>>>>> occupy the largest portion of the kernel's virtual address
>>>>>>>>>>>> space, it cannot occupy the entirety of the address space.
>>>>>>>>>>>> There are other things that must be given space within the
>>>>>>>>>>>> kernel's address space, for example, the file system buffer map.
>>>>>>>>>>>>
>>>>>>>>>>>> ZFS does not, however, use the regular file system buffer
>>>>>>>>>>>> cache.  The ARC takes its place, and the ARC abuses the
>>>>>>>>>>>> kernel's heap like nothing else.  So, if you are running a
>>>>>>>>>>>> machine that only makes trivial use of a non-ZFS file system,
>>>>>>>>>>>> like you boot from UFS, but store all of your data in ZFS,
>>>>>>>>>>>> then you can dramatically reduce the size of the buffer map
>>>>>>>>>>>> via boot loader tuneables and proportionately increase
>>>>>>>>>>>> vm.kmem_size.
>>>>>>>>>>>>
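>>>>>>>>>>>> To see where you stand before changing anything, something
>>>>>>>>>>>> like the following reports the current sizes (read-only
>>>>>>>>>>>> queries against the stock sysctl names used in this thread):
>>>>>>>>>>>>
>>>>>>>>>>>>   sysctl vfs.maxbufspace vfs.bufspace
>>>>>>>>>>>>   sysctl vm.kmem_size vm.kmem_size_max
>>>>>>>>>>>>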
>>>>>>>>>>>> Any further increases in the kernel virtual address space
>>>>>>>>>>>> size will, however, require code changes.  Small changes,
>>>>>>>>>>>> but changes nonetheless.
>>>>>>>>>>>>
>>>>>>>>>>>> Alan
>>>>>>>>>>>>
>>>>>>> <<snip>>
>>>>>>>>>>
>>>>>>>>>> Your objective should be to reduce the value of "sysctl
>>>>>>>>>> vfs.maxbufspace".  You can do this by setting the loader.conf
>>>>>>>>>> tuneable "kern.maxbcache" to the desired value.
>>>>>>>>>>
>>>>>>>>>> What does your machine currently report for "sysctl
>>>>>>>>>> vfs.maxbufspace"?
>>>>>>>>>>
>>>>>>>>> Here you go:
>>>>>>>>> vfs.maxbufspace: 54967025664
>>>>>>>>> kern.maxbcache: 0
>>>>>>>>
>>>>>>>>
>>>>>>>> Try setting kern.maxbcache to two billion and adding 50 billion
>>>>>>>> to the setting of vm.kmem_size{,_max}.
>>>>>>>>
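>>>>>>>> In /boot/loader.conf that would look something like the sketch
>>>>>>>> below; the vm.kmem_size values are placeholders, meaning "your
>>>>>>>> current setting plus 50 billion":
>>>>>>>>
>>>>>>>>   kern.maxbcache="2000000000"
>>>>>>>>   vm.kmem_size="<current value + 50000000000>"
>>>>>>>>   vm.kmem_size_max="<current value + 50000000000>"
>>>>>>>>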
>>>>>> 2 : 50 ==> is this the ratio for further tuning
>>>>>> kern.maxbcache : vm.kmem_size?  Is kern.maxbcache also in bytes?
>>>>>>
>>>>> No, this is not a ratio.  Yes, kern.maxbcache is in bytes.
>>>>> Basically, for every byte that you subtract from vfs.maxbufspace,
>>>>> through setting kern.maxbcache, you can add a byte to
>>>>> vm.kmem_size{,_max}.
>>>>>
>>>>> Alan
>>>>>
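>>>>> To make that concrete with the numbers from this thread (a worked
>>>>> example, assuming kern.maxbcache caps vfs.maxbufspace at roughly
>>>>> the same value):
>>>>>
>>>>>   54967025664 - 2000000000 = 52967025664 bytes freed
>>>>>
>>>>> which is presumably where the conservatively rounded "add 50
>>>>> billion to vm.kmem_size{,_max}" suggestion above comes from.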
>>>> Great! Thanks. Are there other sysctls aside from vfs.bufspace that I
>>>> should monitor for vfs.maxbufspace usage? I just want to make sure
>>>> that vfs.maxbufspace is sufficient for our needs.
>>>
>>>
>>> You might keep an eye on "sysctl vfs.bufdefragcnt".  If it starts rapidly
>>> increasing, you may want to increase vfs.maxbufspace.
>>>
>>> Alan
>>>
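>>> For example, a crude way to watch it (a sketch; the 60-second
>>> interval is arbitrary):
>>>
>>>   while :; do
>>>       sysctl -n vfs.bufdefragcnt
>>>       sleep 60
>>>   done
>>>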
>> We seem to max out vfs.bufspace in <24 hrs of uptime.  It has been
>> steady at 1999273984 while vfs.bufdefragcnt stays at 0 - which I
>> presume is good.  Nevertheless, I will increase kern.maxbcache to 6GB
>> and adjust vm.kmem_size{,_max} and vfs.zfs.arc_max accordingly.  On
>> another machine with vfs.maxbufspace auto-tuned to 7738671104
>> (~7.2GB), vfs.bufspace is now at 5278597120 (uptime 129 days).
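>> In loader.conf terms the plan would be roughly the sketch below; the
>> kmem and ARC lines are placeholders, since the right numbers depend
>> on the machine's RAM:
>>
>>   kern.maxbcache="6442450944"
>>   vm.kmem_size="<resized to leave room for the 6GB buffer map>"
>>   vm.kmem_size_max="<likewise>"
>>   vfs.zfs.arc_max="<kept below vm.kmem_size>"
>>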
>
>
> The buffer map is a kind of cache.  Like any cache, most of the time it will
> be full.  Don't worry.
>
> Moreover, even when the buffer map is full, the UFS file system is caching
> additional file data in physical memory pages that simply aren't mapped for
> instantaneous access.  Essentially, limiting the size of the buffer map is
> only limiting the amount of modified file data that hasn't been written back
> to disk, not the total amount of cached data.
>
> As long as you're making trivial use of UFS file systems, there really isn't
> a reason to increase the buffer map size.
>
> Alan
>
>

I see. Makes sense now. Thanks!

I forgot to mention that we also have smbfs mounts from another server.
Are writes/modifications to files on these mounts also cached in the
buffer map? That applies to all non-ZFS file systems, right? Input/output
files are read from or written to these mounts.