On Tue, Jul 19, 2016 at 06:38:02AM -0700, will sanfilippo wrote:
> I am +1 for mbufs. While they do take a bit of getting used to, I
> think converting the host to use them is the way to go, especially if
> they replace a large flat buffer pool.
> 
> I also think we should mention that mbufs use a fair amount of
> overhead. I don't think that applies here as the flat buffer required
> for an ATT buffer is quite large (if I recall). But if folks want to
> use mbufs for other things they should be aware of this.

Good point.  Mbufs impose the following overhead:

* First buffer in chain: 24 bytes (os_mbuf + os_mbuf_pkthdr)
* Subsequent buffers:    16 bytes (os_mbuf)
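
To put numbers on it, here is a rough sketch (plain C) of how that
overhead adds up across a chain.  It uses only the per-buffer figures
above; the bytes-of-data-per-block value is a made-up parameter for
illustration, not an actual mbuf pool setting.

    #include <stdio.h>

    /* Per-buffer overhead figures from above. */
    #define MBUF_FIRST_OVERHEAD 24  /* os_mbuf + os_mbuf_pkthdr */
    #define MBUF_NEXT_OVERHEAD  16  /* os_mbuf only */

    /* Total mbuf overhead for a packet of pkt_len bytes, assuming each
     * mbuf block carries blk_payload bytes of data (a hypothetical pool
     * parameter, not a real NimBLE setting).
     */
    static int
    mbuf_chain_overhead(int pkt_len, int blk_payload)
    {
        int nbufs = (pkt_len + blk_payload - 1) / blk_payload;

        if (nbufs < 1) {
            nbufs = 1;
        }
        return MBUF_FIRST_OVERHEAD + (nbufs - 1) * MBUF_NEXT_OVERHEAD;
    }

    int
    main(void)
    {
        /* E.g., a 515-byte ATT message in 128-byte data blocks needs
         * 5 mbufs: 24 + 4 * 16 = 88 bytes of overhead.
         */
        printf("%d\n", mbuf_chain_overhead(515, 128));
        return 0;
    }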

Also, I didn't really explain it, but there is only one flat buffer in
the host that is being allocated: the ATT rx buffer.  This single buffer
is used for receives of all ATT commands and optionally for application
callbacks to populate with read responses.  The host gets by with a
single buffer because all receives are funneled into one task.  This
buffer is sized according to the largest ATT message the stack is built
to receive, with a spec-imposed cap of 515 bytes [*].  So, switching to
mbufs would probably save some memory here, but the savings wouldn't be
dramatic.
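
To make that concrete, here is a minimal sketch of the flat-buffer
approach, assuming a build-time MTU preference.  The identifiers below
are made up for illustration; they are not the actual NimBLE names or
config settings.

    #include <stdint.h>

    /* Hypothetical build-time preference for the ATT MTU; the
     * effective rx buffer size is capped at 515 bytes, per the
     * spec-derived limit described above.
     */
    #define BLE_ATT_MTU_PREFERRED   527
    #define BLE_ATT_MTU_MAX         515
    #define BLE_ATT_RX_BUF_SZ                       \
        (BLE_ATT_MTU_PREFERRED < BLE_ATT_MTU_MAX ?  \
         BLE_ATT_MTU_PREFERRED : BLE_ATT_MTU_MAX)

    /* One flat buffer suffices because all ATT receives are funneled
     * through the single host task.
     */
    static uint8_t ble_att_rx_buf[BLE_ATT_RX_BUF_SZ];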

[*] This isn't totally true, as the ATT MTU can be as high as 65535.
However, the maximum size of an attribute is 512 bytes, which limits the
size of the largest message any ATT bearer would send.  The one
exception is the ATT Read Multiple Response, which can contain several
attributes.  However, this command is seldom used, and is not practical
with large attributes.  So, the current NimBLE code does impose an
artificial MTU cap of 515 bytes, but in practice I don't think it would
ever be noticed.
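
For what it's worth, the 515 figure presumably falls out of a
maximum-size attribute value plus the largest ATT header that carries
a single attribute value (1-byte opcode plus 2-byte handle, as in a
write or notification).  A quick sanity check, with made-up macro
names:

    #define BLE_ATT_ATTR_MAX_LEN    512     /* spec limit on attribute value */
    #define BLE_ATT_HDR_MAX_SZ      (1 + 2) /* opcode + attribute handle */

    _Static_assert(BLE_ATT_ATTR_MAX_LEN + BLE_ATT_HDR_MAX_SZ == 515,
                   "cap should come out to 515 bytes");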

Chris