On 07.12.2012 10:36, Oleksandr Tymoshenko wrote:

On 2012-11-27, at 1:19 PM, Andre Oppermann <an...@freebsd.org> wrote:

Author: andre
Date: Tue Nov 27 21:19:58 2012
New Revision: 243631
URL: http://svnweb.freebsd.org/changeset/base/243631

Log:
  Base the mbuf related limits on the available physical memory or
  kernel memory, whichever is lower.  The overall mbuf related memory
  limit must be set so that mbufs (and clusters of various sizes)
  can't exhaust physical RAM or KVM.

  The limit is set to half of the physical RAM or KVM (whichever is
  lower) as the baseline.  In any normal scenario we want to leave
  at least half of the physmem/kvm for other kernel functions and
  userspace to prevent it from swapping too easily.  Via a tunable
  kern.maxmbufmem the limit can be upped to at most 3/4 of physmem/kvm.
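
  The sizing rule above can be sketched as follows (a minimal,
  illustrative model only: function and variable names are hypothetical,
  not the actual code behind init_param2()):

  ```c
  #include <assert.h>
  #include <stdint.h>

  /*
   * Illustrative sketch: the mbuf memory limit is half of the smaller
   * of physical RAM and KVM; a nonzero kern.maxmbufmem tunable can
   * raise it, clamped to 3/4 of that same baseline.
   */
  static uint64_t
  mbuf_mem_limit(uint64_t physmem, uint64_t kva, uint64_t tunable)
  {
          uint64_t avail = physmem < kva ? physmem : kva;
          uint64_t limit = tunable != 0 ? tunable : avail / 2;

          if (limit > avail / 4 * 3)      /* cap at 3/4 of physmem/kvm */
                  limit = avail / 4 * 3;
          return (limit);
  }

  int
  main(void)
  {
          /* 1GB KVM with plentiful RAM: baseline is half, i.e. 512MB. */
          assert(mbuf_mem_limit(4ULL << 30, 1ULL << 30, 0) == 512ULL << 20);
          /* A tunable above 3/4 of the available memory is clamped. */
          assert(mbuf_mem_limit(1ULL << 30, 1ULL << 30, 1ULL << 30) ==
              (1ULL << 30) / 4 * 3);
          return (0);
  }
  ```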

  At the same time divorce maxfiles from maxusers and set maxfiles to
  physpages / 8 with a floor based on maxusers.  This way busy servers
  can make use of the significantly increased mbuf limits with a much
  larger number of open sockets.
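
  By way of illustration, the new maxfiles derivation might look like
  the sketch below.  The maxusers-based floor constant (32 per user) is
  an assumption for illustration, not the committed value:

  ```c
  #include <assert.h>

  /*
   * Hypothetical sketch: maxfiles scales with physical pages rather
   * than maxusers, with a maxusers-based floor.  The factor of 32 is
   * assumed here, not taken from the actual commit.
   */
  static long
  calc_maxfiles(long physpages, long maxusers)
  {
          long floor = maxusers * 32;     /* assumed floor constant */
          long files = physpages / 8;

          return (files > floor ? files : floor);
  }

  int
  main(void)
  {
          /* Plenty of RAM: maxfiles tracks physpages / 8. */
          assert(calc_maxfiles(4 * 1024 * 1024, 384) == 512 * 1024);
          /* Little RAM: the maxusers-based floor takes over. */
          assert(calc_maxfiles(1024, 384) == 384 * 32);
          return (0);
  }
  ```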

  Tidy up ordering in init_param2() and check up on some users of
  those values calculated here.

  Out of the overall mbuf memory limit, 2K clusters and 4K (page size)
  clusters get 1/4 each because these are the most heavily used mbuf
  sizes.  2K clusters are used for MTU 1500 ethernet inbound packets.
  4K clusters are used whenever possible for sends on sockets and thus
  outbound packets.  The larger cluster sizes of 9K and 16K are limited
  to 1/6 of the overall mbuf memory limit.  When jumbo MTUs are used
  these large clusters will end up only on the inbound path.  They are
  not used on the outbound path, which stays at 4K.  Yes, that will
  stay that way, because otherwise we run into lots of complications
  in the stack.  And it really isn't a problem, so don't make a scene.

  Normal mbufs (256B) weren't limited at all previously.  This was
  problematic, as there are certain places in the kernel that, on
  cluster allocation failure, try to piece together their packets
  from smaller mbufs.

  The mbuf limit is the number of all other mbuf sizes together plus
  some more to allow for standalone mbufs (ACKs, for example) and to
  send off a copy of a cluster.  Unfortunately there isn't a way to
  set an overall limit for all mbuf memory together, as UMA doesn't
  support such a limit.

  NB: Every cluster also has an mbuf associated with it.

  Two examples on the revised mbuf sizing limits:

  1GB KVM:
   512MB limit for mbufs
   419,430 mbufs
    65,536 2K mbuf clusters
    32,768 4K mbuf clusters
     9,709 9K mbuf clusters
     5,461 16K mbuf clusters

  16GB RAM:
   8GB limit for mbufs
   33,554,432 mbufs
    1,048,576 2K mbuf clusters
      524,288 4K mbuf clusters
      155,344 9K mbuf clusters
       87,381 16K mbuf clusters
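
  The cluster counts in both examples follow directly from the stated
  fractions.  A quick arithmetic check (assuming 9K clusters are
  9*1024 bytes and counts round down by integer truncation):

  ```c
  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  int
  main(void)
  {
          uint64_t lim1 = 512ULL << 20;   /* 512MB limit (1GB KVM case) */
          uint64_t lim2 = 8ULL << 30;     /* 8GB limit (16GB RAM case) */

          /* 2K and 4K clusters get 1/4 of the limit each. */
          assert(lim1 / (4 * 2048) == 65536);
          assert(lim1 / (4 * 4096) == 32768);
          /* 9K and 16K clusters get 1/6 each. */
          assert(lim1 / (6 * 9 * 1024) == 9709);
          assert(lim1 / (6 * 16 * 1024) == 5461);

          assert(lim2 / (4 * 2048) == 1048576);
          assert(lim2 / (4 * 4096) == 524288);
          assert(lim2 / (6 * 9 * 1024) == 155344);
          assert(lim2 / (6 * 16 * 1024) == 87381);

          printf("cluster counts match the commit message\n");
          return (0);
  }
  ```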

  These defaults should be sufficient for even the most demanding
  network loads.

Andre,

these changes along with r243631 break booting ARM kernels on devices with 1GB 
of memory:

vm_thread_new: kstack allocation failed
panic: kproc_create() failed with 12
KDB: enter: panic

If I manually set the amount of memory to 512MB, it boots fine.
If you need help debugging this issue or testing possible fixes, I'll be glad 
to help.

What is the kmem layout/setup on ARM?  If it is like i386, then maybe
the parameters VM_MAX_KERNEL_ADDRESS and VM_MIN_KERNEL_ADDRESS are not
set up correctly and the available kmem is assumed to be larger than
it really is.

--
Andre

_______________________________________________
svn-src-head@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/svn-src-head
To unsubscribe, send any mail to "svn-src-head-unsubscr...@freebsd.org"
