I'm trying to find some information on reasonable settings for
debug.max_softdeps on a recent FreeBSD-stable system.

  It seems that if you have a machine that can generate disk I/O
much faster than the disks can handle, has a large amount of RAM (and
therefore a large debug.max_softdeps), and a very large filesystem (about
80GB), filesystem metadata updates can get _very_ far behind.

  For instance, on a test system running 4 instances of postmark
continuously for 24 hours, "df" reports that 40 GB of disk space is being
used, even though only about 5 GB is actually used.  If I kill the
postmark processes, the metadata is eventually dribbled out and "df"
reports 5GB in use.  It takes about 20 minutes for the metadata to be
updated on a completely idle system.
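  A toy model of what seems to be happening: with soft updates, a block
free is queued as a dependency and flushed later, so "df" (which counts
blocks whose frees have not yet hit the disk) can run far ahead of actual
usage while the backlog grows.  This is only an illustrative sketch, not
FreeBSD internals; all names and numbers are made up:

```python
from collections import deque

def simulate(steps, writes_per_step, flushes_per_step, max_pending):
    """Toy model: each step allocates some blocks and immediately
    deletes them, but each free is queued as a pending metadata
    update.  'reported' is what df would show (blocks whose frees
    have not yet been flushed); the syncer only retires a limited
    number of pending frees per step."""
    pending = deque()   # queued block frees (soft update dependencies)
    reported = 0        # blocks df would still count as in use
    for _ in range(steps):
        # workload: allocate and delete blocks (postmark-like churn)
        for _ in range(writes_per_step):
            if len(pending) < max_pending:
                reported += 1        # block allocated...
                pending.append(1)    # ...and its free is queued
        # syncer: flush a bounded number of pending frees per step
        for _ in range(min(flushes_per_step, len(pending))):
            reported -= pending.popleft()
    return reported, len(pending)

# Producer outruns the flusher: df's number grows with the backlog,
# even though the actual live data here is zero.
busy, backlog = simulate(steps=100, writes_per_step=50,
                         flushes_per_step=10, max_pending=10**6)
```

Once the workload stops (writes_per_step drops to 0), the same loop
drains the queue and the reported usage falls back to the true figure,
which matches the 20-minute dribble-out seen above.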

  On this particular system, it doesn't seem to stabilize either.  If the
4 postmark instances are allowed to run, disk usage seems to climb
indefinitely (at 40GB it was still climbing), until eventually the machine
silently reboots.

  debug.max_softdeps is by default set to 523,712 (1 GB of RAM).  Is that
a reasonable value?  I see some tests in the docs with max_softdeps set to
4000 or so.
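  For anyone wanting to experiment, the knob can be inspected and changed
at run time with sysctl.  Lowering it should bound how many dependencies
can be outstanding; the 4000 is just the small value from the docs, not a
tested recommendation:

```shell
# Show the current cap on outstanding soft update dependencies
sysctl debug.max_softdeps

# Try a much smaller cap so metadata flushing can't fall as far behind
sysctl debug.max_softdeps=4000
```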


Tom



To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message
