I think the basic application requirement this was designed for is keeping the 
data from N sensors relatively in sync with one another. One example is 
tracking movement, where you want to see the relative movement of the sensors 
in as close to real time as possible; I am sure there are numerous other 
applications that would benefit from this form of scheduling. Another 
requirement is to discard old/stale data: if data cannot be delivered within a 
certain time, it should be dropped in favor of “new” data.

A note: this implementation does not add any form of “QoS”; it is meant as a 
proof of concept to show that you can connect N peripherals to a central and 
guarantee them fixed time slots using BLE. QoS will be added in a future 
revision of this code so that data can be discarded if it gets too stale. As 
mentioned, it is all very preliminary.

> On Mar 21, 2017, at 4:50 PM, Sterling Hughes 
> <sterling.hughes.pub...@gmail.com> wrote:
> 
> Hey Will -
> 
> This sounds pretty cool.  I’m interested: what type of sensor data did you 
> need to have this hard scheduling in Bluetooth for/what were the application 
> requirements you were engineering for?
> 
> Sterling
> 
> On 21 Mar 2017, at 16:21, will sanfilippo wrote:
> 
>> Hello:
>> 
>> Disclaimers:
>> 1) Long email follows.
>> 2) This is a preliminary version of this code. There are hard-coded values 
>> and some hacks, mainly because of my ignorance of some areas of the code. I 
>> pushed it so some folks (if they want) can take a look and mess around 
>> before things get cleaned up.
>> 
>> For those interested, a branch was committed today named the “bsnbranch”. 
>> For lack of a better term, I called this the “body sensor network” branch. 
>> This could be quite the misnomer as there is no actual sensor code with this 
>> commit, but I had to come up with a name :-)
>> 
>> The basic idea behind this branch is the following:
>> 
>> * A central wants to connect to N known peripherals.
>> * Each peripheral wants to connect to a known central.
>> * Peripherals generate “application data” at some fixed size and rate (for 
>> the most part). This rate is expected to be pretty fast.
>> * Peripherals and centrals should do their best to maintain these 
>> connections and if a connection is dropped, to re-connect.
>> * The central should allocate fixed time slots to the peripherals and 
>> guarantee those fixed time slots are available.
>> 
>> As with some of the apps in the repo, the initial commit is fairly 
>> hard-coded in some ways. If you look at the source code in main.c in these 
>> apps, there are arrays which currently hold some hard-coded addresses: the 
>> public address of the peripheral, the public address of the central, and the 
>> addresses that the central wants to connect to. The application example 
>> shows a central that wants to connect to 5 peripherals. To use the apps as 
>> committed, you need to change BLE_MAX_CONNECTIONS to 5 when you build your 
>> central (the default lives in net/nimble/syscfg.yml); a target-level 
>> override is sketched below.
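>>
>> A minimal sketch of how that override might look in a target's syscfg.yml 
>> (the target name/path here are hypothetical; only the BLE_MAX_CONNECTIONS 
>> value comes from the app's requirements):
>>
>>    # targets/bsncent/syscfg.yml (hypothetical target)
>>    syscfg.vals:
>>        BLE_MAX_CONNECTIONS: 5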
>> 
>> The central application adds the devices in the bsncent_peer_addrs array to 
>> the whitelist and constantly initiates if it is not connected to all of 
>> these devices. The peripheral application does high-duty-cycle directed 
>> advertising (constantly!) until it connects to the central. If a connection 
>> is dropped, the central and/or peripheral start initiating/advertising until 
>> the connection is re-established. NOTE: there is currently no delay between 
>> the high-duty-cycle advertising attempts, so beware of that if you are 
>> running your peripheral on a battery!
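>>
>> For flavor, here is a rough sketch of the central side of that loop, using 
>> the GAP API of recent NimBLE host versions (the event callback and error 
>> handling are placeholders, and the actual code in main.c may differ):
>>
>>    #include <assert.h>
>>    #include "host/ble_hs.h"
>>
>>    /* Hard-coded public addresses; filled in elsewhere in main.c. */
>>    static const ble_addr_t bsncent_peer_addrs[5];
>>
>>    static int gap_event_cb(struct ble_gap_event *event, void *arg);
>>
>>    static void
>>    start_initiating(void)
>>    {
>>        int rc;
>>
>>        /* Put the known peripherals on the controller's white list. */
>>        rc = ble_gap_wl_set(bsncent_peer_addrs, 5);
>>        assert(rc == 0);
>>
>>        /* A NULL peer address means: initiate using the white list. */
>>        rc = ble_gap_connect(BLE_OWN_ADDR_PUBLIC, NULL, BLE_HS_FOREVER,
>>                             NULL, gap_event_cb, NULL);
>>        assert(rc == 0);
>>    }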
>> 
>> The central currently uses a hard-coded connection interval of 13 (16.25 
>> msecs). More on this later. The peripheral attempts to send an approximately 
>> 80-byte packet at a rate close to this connection interval. That timing is 
>> based on os ticks, so it is not perfect; if folks want more accurate timing, 
>> something else would need to be done.
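>>
>> A minimal sketch of that tick-based pacing (the callout, connection and 
>> attribute handles, and payload are hypothetical stand-ins, not the branch's 
>> actual code):
>>
>>    #include "os/os.h"
>>    #include "host/ble_hs.h"
>>
>>    /* assumed: os_callout_init(&bsnprph_tx_timer, os_eventq_dflt_get(),
>>     *          bsnprph_tx_timer_cb, NULL) at startup */
>>    static struct os_callout bsnprph_tx_timer;
>>    static uint16_t conn_handle;    /* set from the connect event */
>>    static uint16_t tx_attr_handle; /* hypothetical characteristic handle */
>>    static uint8_t payload[80];
>>
>>    static void
>>    bsnprph_tx_timer_cb(struct os_event *ev)
>>    {
>>        struct os_mbuf *om;
>>        (void)ev;
>>
>>        om = ble_hs_mbuf_from_flat(payload, sizeof payload);
>>        if (om != NULL) {
>>            ble_gattc_notify_custom(conn_handle, tx_attr_handle, om);
>>        }
>>
>>        /* 16 msecs in os ticks: close to, but not exactly, the 16.25
>>         * msec connection interval -- hence the imperfect timing. */
>>        os_callout_reset(&bsnprph_tx_timer,
>>                         (16 * OS_TICKS_PER_SEC) / 1000);
>>    }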
>> 
>> The central also displays some basic performance numbers on the console at a 
>> 10-second interval: number of connections, total packets received, total 
>> bytes received, and the pkts/sec and bytes/sec over the last 10-second 
>> interval.
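>>
>> The rate numbers are just deltas over the window, i.e. something like this 
>> (variable names are mine, not the app's):
>>
>>    /* Every 10 seconds: */
>>    pkts_per_sec  = (tot_pkts  - prev_pkts)  / 10;
>>    bytes_per_sec = (tot_bytes - prev_bytes) / 10;
>>    prev_pkts  = tot_pkts;
>>    prev_bytes = tot_bytes;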
>> 
>> While I was testing this setup (5 peripherals, one central), I ran into some 
>> resource issues. I cannot claim to know the host code all that well, but 
>> here are the items that I modified to get this to work. Some of these may 
>> not be necessary, since I did not test them in all their various 
>> combinations, and some may have no impact at all.
>> 
>> NOTE: these changes are not in the branch, btw. They need to be made by 
>> either changing a syscfg value or hacking the code. I realize hacking the 
>> code is quite undesirable, but it was not obvious how to do this with syscfg 
>> and my lack of understanding of the code prevented me from doing something 
>> more elegant. The items in CAPS are syscfg variables; changing them in your 
>> target is a good way to set them. A consolidated example target snippet 
>> follows the list.
>> 
>> 1) Mbufs at the central. I modified the number of mbufs and their size. I 
>> used 24 mbufs with a size of 128. I am not sure how many you actually need, 
>> but I did not run out of mbufs with these settings:
>>    MSYS_1_BLOCK_COUNT: 24
>>    MSYS_1_BLOCK_SIZE: 128
>> 2) BLE_GATT_MAX_PROCS: I increased this to 8 for the central.
>> 3) BLE_MAX_CONNECTIONS: I made this 5 for the central. NOTE: 32 is the 
>> maximum number of connections supported here. If you use more, the code does 
>> not complain and the behavior will be unpredictable.
>> 4) I hacked the code to add more entries to the ble_att_svr_entry_pool. I 
>> multiplied the number by 2 (ble_hs_max_attrs * 2).
>> 5) I believe I added 12 to the ble_gatts_clt_cfg_pool, but I am not sure 
>> this is needed.
>> 6) Enabled data length extension by setting BLE_LL_CONN_INIT_MAX_TX_BYTES to 
>> 251. This number could be made smaller, but for now I made it the full size. 
>> This is for both central and peripheral.
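>>
>> Putting the syscfg-level items (1, 2, 3 and 6) together, a central target 
>> override might look like the sketch below; items 4 and 5 are code hacks and 
>> have no syscfg knob:
>>
>>    syscfg.vals:
>>        MSYS_1_BLOCK_COUNT: 24
>>        MSYS_1_BLOCK_SIZE: 128
>>        BLE_GATT_MAX_PROCS: 8
>>        BLE_MAX_CONNECTIONS: 5
>>        BLE_LL_CONN_INIT_MAX_TX_BYTES: 251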
>> 
>> SCHEDULER CHANGES:
>> A large part of the changes to the controller involve how connections get 
>> scheduled. There are three configuration items that can be, and need to be, 
>> modified for this code to work. I realize I committed this with some default 
>> numbers that probably should be turned off when we merge this into develop, 
>> but for now realize these numbers are based on the connection interval that 
>> the central uses (16.25 msecs) and on 5 connections.
>> 
>> BLE_LL_STRICT_CONN_SCHEDULING: This is basically a flag that turns this form 
>> of scheduling on/off for the central (boolean value, 0 or 1).
>> BLE_LL_ADD_STRICT_SCHED_PERIODS: Adds additional periods to the epoch (see 
>> more below). Default: 0.
>> BLE_LL_USECS_PER_PERIOD: The number of usecs per period (see more below). 
>> Default: 3250.
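>>
>> For the committed 5-connection example, the numbers line up like this (just 
>> restating the defaults above, not new settings):
>>
>>    BLE_LL_STRICT_CONN_SCHEDULING: 1
>>    BLE_LL_ADD_STRICT_SCHED_PERIODS: 0
>>    BLE_LL_USECS_PER_PERIOD: 3250
>>    # epoch = (5 conns + 0 extra periods) * 3250 usecs = 16250 usecs,
>>    # i.e. the hard-coded connection interval of 13 (13 * 1250 usecs).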
>> 
>> The terminology used above is pretty simple. The central divides time into 
>> epochs. Each epoch is composed of N periods, where N is the number of 
>> connections plus BLE_LL_ADD_STRICT_SCHED_PERIODS. The connection interval 
>> should then be made equal to the epoch length. I realize that some of this 
>> could have been calculated a bit more easily and with less configuration; 
>> those changes will be added soon. Hopefully you see the basic idea: you want 
>> to use a connection interval such that each period repeats in each epoch. 
>> Connections get assigned to a period and they keep that period in each 
>> epoch. As long as the connection interval is a multiple of the epoch length, 
>> you should be fine. For example, if you want 6 connections and a 30 msec 
>> connection interval, you can make each period 5 msecs (see the sketch 
>> below). Realize that once you fill up all the periods you will not be able 
>> to do anything else.
>>
>> Currently, we really do not support advertising on the central. Well, you 
>> can do it, but the scheduler has not been modified to deal with advertising 
>> and your mileage will certainly vary! BLE_LL_ADD_STRICT_SCHED_PERIODS is an 
>> attempt at reserving some time in the epoch for other things. Certainly, you 
>> can scan/initiate, but the scan window/interval is currently not forced to 
>> occur on any particular period boundary, so generally it is expected that 
>> your scan window will equal your scan interval (and thus scanning will occur 
>> whenever the device is not inside a connection event).
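>>
>> That sizing rule is easy to capture in code; a minimal sketch (the names are 
>> mine, not from the branch):
>>
>>    #include <stdint.h>
>>
>>    /* Connection interval (in BLE's 1.25 msec units) that makes the
>>     * connection interval equal the epoch length. */
>>    static uint16_t
>>    strict_sched_conn_itvl(uint8_t num_conns, uint8_t extra_periods,
>>                           uint32_t usecs_per_period)
>>    {
>>        uint32_t epoch_usecs;
>>
>>        epoch_usecs = (num_conns + extra_periods) * usecs_per_period;
>>        return epoch_usecs / 1250;
>>    }
>>
>>    /* 5 conns, 3250 usec periods -> 16250 usecs -> interval 13 (16.25 ms) */
>>    /* 6 conns, 5000 usec periods -> 30000 usecs -> interval 24 (30 ms)    */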
>> 
>> I realize that this email glosses over some items and really requires folks 
>> to dive into things a bit to fully understand them. I would be happy to 
>> answer questions about the code. I am not quite sure it is truly “ready for 
>> prime time”, as there are some items that still need to be dealt with, but 
>> it should work reasonably well for now.
>> 
>> Thanks!
