On 2013-01-30, Constantine A. Murenin <muren...@gmail.com> wrote:
> Hello misc@,
>
> On OpenBSD 5.2 amd64, I'm storing 1.4GB of source code files and about
> 8x 150MB indices on an mfs partition, plus a gig or two of other
> automatically-generated files.
>
> If I run mount_mfs to load all this stuff from a regular drive, then
> the amount of memory used by mount_mfs(8) is about the same as the
> amount of Used disc space as reported by df(1).
>
> However, if I re-run index generation on an mfs, then after it's all
> done, memory usage by mount_mfs(8) noticeably exceeds Used disc space.
>  As a workaround, I found that it's possible to copy all the files
> over to a new mount_mfs(8) process, after the indices have been
> re-generated, and the new process will at first have a much better
> memory usage, but this seems a little inconvenient and would also
> require a temporary burst of extra RAM to accomplish.
>
> Should I worry that on a 6GB partition that is only 4GB full,
> mount_mfs uses 5GB of memory after about 3GB of data gets mingled?  Is
> mount_mfs swappable?  If I end up being short on memory, would that
> extra 1GB from mount_mfs(8) be swapped out without affecting the
> performance?  Or is there a way to run some kind of garbage collector
> or otherwise improve memory use on an mfs?
>
> % df -hi | fgrep -e Used -e mfs ; mount | fgrep mfs ; ps aux | fgrep
> -e USER -e mfs
> Filesystem     Size    Used   Avail Capacity iused   ifree  %iused  Mounted on
> mfs:18610      5.9G    4.1G    1.5G    73%  439864  357702    55%   /grok/mfs
> mfs:18610 on /grok/mfs type mfs (asynchronous, local, nodev, nosuid,
> size=12582912 512-blocks)
> USER       PID %CPU %MEM   VSZ   RSS TT  STAT  STARTED       TIME COMMAND
> root     18610  0.0 40.2 6291936 5048352 ??  Is    Sun07PM    0:22.56
> /sbin/mount_mfs -o rw -s6G -f2048
>
>
> Cheers,
> Constantine.
>
>

This is expected with mfs; don't set the filesystem to be larger
than the amount of memory you would like it to use.  The backing
store is the address space of the mount_mfs process, and pages that
have been written once are not handed back when files are deleted,
so memory use creeps up toward the full -s size as data gets
rewritten.
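
For example (size and mount point purely illustrative, copied from
your output above), you could cap it at 4G on the command line:

% mount_mfs -o rw -s4G -f2048 swap /grok/mfs

or, assuming the usual fstab syntax for passing mfs options, with a
line like this in /etc/fstab (see mount_mfs(8) for how -s sizes are
parsed there):

swap /grok/mfs mfs rw,nodev,nosuid,-s=4g 0 0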

I don't know how it behaves with swapping.
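
For what it's worth, one rough way to watch it is to compare VSZ
against RSS for the mount_mfs process, e.g. (PID taken from your ps
output above):

% ps -o vsz,rss,command -p 18610

A shrinking RSS under memory pressure would suggest the pages are
being swapped out, though VSZ also counts pages that were never
touched, so the difference alone isn't conclusive.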
