On Thu, Jun 30, 2016 at 01:31:00PM -0700, Simon Ratner wrote:
> It is sometimes hard to tell if they are "expected", but below are a couple
> of sample traces.
> 
> Based on the timestamps, I would say the first two are expected (device was
> slow to respond), while the last one is unexpected. In the last trace, the
> gatt op failed for another reason (rc=6 i.e. BLE_HS_ENOMEM... as an aside,
> any idea which specific resource it may have run out of?)

My guess is that the BLE_HS_ENOMEM is caused by a lack of mbufs.  The
problem is not too few gattc procedure structs; if it were, the initial
call to ble_gattc_disc_svc_by_uuid() would have returned the error
immediately, rather than the failure showing up later in the procedure
callback.
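
To make that distinction concrete, here is a rough sketch of where each
failure would show up.  Signatures differ slightly between nimble
versions, and conn_handle / svc_uuid / disc_cb are just placeholders for
whatever your app uses:

    #include "host/ble_hs.h"
    #include "host/ble_gatt.h"

    /* Placeholder for your 128-bit service UUID. */
    static const uint8_t svc_uuid[16];

    static int
    disc_cb(uint16_t conn_handle, const struct ble_gatt_error *error,
            const struct ble_gatt_svc *service, void *arg)
    {
        if (error != NULL && error->status == BLE_HS_ENOMEM) {
            /* The procedure was accepted but later failed at the ATT
             * layer for lack of an mbuf; this is roughly where the rc=6
             * in your third trace comes from. */
        }
        return 0;
    }

    static void
    start_disc(uint16_t conn_handle)
    {
        int rc;

        rc = ble_gattc_disc_svc_by_uuid(conn_handle, svc_uuid, disc_cb, NULL);
        if (rc != 0) {
            /* Running out of gattc procedure structs would have shown up
             * here, as a synchronous error from the call itself. */
        }
    }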

This tells me two things:

1. We need to be more efficient with mbufs.  In particular, nimble
should be able to operate with smaller (and hence more numerous) mbufs.
This is something we have known about for a while but haven't had a
chance to address yet.  At the moment, nimble requires large mbufs
(>= 260 bytes), which obviously limits how many mbufs fit in the pool
your application can afford to allocate.

2. The GATT layer should not give up so easily when the layer below it
(ATT) fails due to memory exhaustion.  It should just try again a bit
later.
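
Until that change lands in nimble, an application can approximate the
retry itself.  A minimal sketch, reusing the placeholder names from the
earlier snippet: the disc_cb would set disc_retry_pending (and record
the conn handle) when it sees BLE_HS_ENOMEM, and the poll function runs
from your application task, not the host task:

    static volatile int disc_retry_pending;
    static uint16_t disc_conn_handle;

    /* Call this periodically from the application task's loop so a
     * procedure that failed with BLE_HS_ENOMEM simply gets re-issued a
     * bit later, once some mbufs have been freed. */
    static void
    app_poll_disc_retry(void)
    {
        int rc;

        if (!disc_retry_pending) {
            return;
        }
        disc_retry_pending = 0;

        rc = ble_gattc_disc_svc_by_uuid(disc_conn_handle, svc_uuid,
                                        disc_cb, NULL);
        if (rc == BLE_HS_ENOMEM) {
            /* Still starved; leave it for the next pass. */
            disc_retry_pending = 1;
        }
    }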

If I recall correctly, your application is scanning most of the time.  I
have not initiated many GATT procedures while scanning myself, and I
suspect that combination might be what is causing the mbuf shortage, so
this is something I will be experimenting with.

In the meantime, you might try the following:

* Increase the number of mbufs allocated to msys, if your platform has
the RAM for it; see the sketch after this list.

* Disable logging (perhaps a non-starter, I know).  Logging to the
console is quite slow, and while your application is busy logging, mbufs
sit in queues waiting to be processed and freed.
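
On the first suggestion: your app's msys setup probably already looks
something like the sketch below, so raising MBUF_NUM_MBUFS is the knob
to turn.  Everything here is illustrative (names, counts, and the exact
per-mbuf overhead depend on your tree); the one hard constraint is the
>= 260-byte payload mentioned above:

    #include "os/os.h"
    #include "os/os_mbuf.h"

    #define MBUF_NUM_MBUFS      (16)   /* raise this if RAM allows */
    #define MBUF_BUF_SIZE       (260)  /* nimble's current minimum payload */
    #define MBUF_MEMBLOCK_SIZE  (MBUF_BUF_SIZE + sizeof(struct os_mbuf) + \
                                 sizeof(struct os_mbuf_pkthdr))
    #define MBUF_MEMPOOL_SIZE   OS_MEMPOOL_SIZE(MBUF_NUM_MBUFS, \
                                                MBUF_MEMBLOCK_SIZE)

    static os_membuf_t mbuf_mpool_data[MBUF_MEMPOOL_SIZE];
    static struct os_mempool mbuf_mpool;
    static struct os_mbuf_pool mbuf_pool;

    static void
    app_msys_init(void)
    {
        /* Carve the pool out of static RAM and hand it to msys so the
         * stack (and nimble) draws its mbufs from it. */
        os_mempool_init(&mbuf_mpool, MBUF_NUM_MBUFS, MBUF_MEMBLOCK_SIZE,
                        mbuf_mpool_data, "app_mbuf_data");
        os_mbuf_pool_init(&mbuf_pool, &mbuf_mpool, MBUF_MEMBLOCK_SIZE,
                          MBUF_NUM_MBUFS);
        os_msys_register(&mbuf_pool);
    }

Each additional mbuf at that size costs on the order of 300 bytes of RAM
once the mbuf headers are counted, which is the trade-off I was getting
at in point 1 above.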
