Re: assert(), faulting traps etc

2015-10-27 Thread will sanfilippo
Regarding assert() and assert_debug(): where would you put the dir “sys”? Why 
not put the definitions in an os include dir? Is that due to dependencies you 
would not want (i.e. os would need console, or something like that)? Or is it 
that you don't feel assert belongs in the os?

About the names: I think someone was mentioning that assert normally gets 
defined out if you define NDEBUG. Is that true? If that is the case, shouldn't 
we keep that paradigm? I know we discussed this a bit but I can't recall what we 
decided :-)

Will
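For reference, the NDEBUG behavior asked about above is standard C: defining 
NDEBUG before including <assert.h> compiles assertions away entirely. A rough 
sketch of the standard contract (real implementations also print the 
expression, file and line on failure):

    #include <stdlib.h>    /* for abort() */

    #ifdef NDEBUG
    #define assert(expr) ((void)0)               /* assertions compiled out */
    #else
    #define assert(expr) ((expr) ? (void)0 : abort())
    #endif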


> On Oct 26, 2015, at 5:36 PM, marko kiiskila  wrote:
> 
>> 
>> On Oct 26, 2015, at 5:08 PM, Christopher Collins  wrote:
>> 
>> On Mon, Oct 26, 2015 at 03:35:12PM -0700, marko kiiskila wrote:
>>> Hi,
>> 
>> [...]
>> 
>>> And 2nd (related) topic:
>>> At the moment there is libs/console/full and libs/console/stub.
>>> What kind of mechanism should we have for picking between
>>> these two implementations? What I’d need is a way for the builder
>>> of a project to pick between these, and for all eggs in the project
>>> to include the header file for the right implementation.
>>> 
>>> But I also want to be able to specify within egg definition that
>>> the egg will make calls to console_printf(). Egg itself does not
>>> care which one of the implementations gets used.
>>> 
>>> Is there a way to do this kind of thing yet?
>> 
>> The newt tool supports the concept of "capabilities" which can address
>> this requirement.  
> ...
>> All that said, the capabilities feature isn't actually fully implemented
>> yet; the above is just how I might expect it to work once it is done :).
>> 
>> Here are a few more questions that I think need to be answered:
>> 
>>   * How does the "peripheral" egg include the appropriate console
>> header files?  Even if both console eggs have identically named
>> header files, newt needs to arrange for the appropriate include
>> directory to be passed to the compiler.
>> 
>>   * What is an easy way to switch between the debug and full
>> implementations?  The user should not need to modify the
>> myproject.yml file to do this.  Perhaps the "identities" feature
>> can be used here.
>> 
> 
> Thanks, this sounds like a thing that matches my requirement.
> I’ll have to start filling in these blanks in newt tool then.
> 
> I think I’ll treat a dependency that comes in the form of a capability
> requirement the same way as I’d treat a normal dependency:
> once an egg is found which implements the capability, its include
> path is passed to the compiler.
> 
> I won’t address the identity thing with this work, but that sounds
> useful. Depending on whether you’re building a debug version of your
> project or not, you could pick whether you get console output.
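For illustration only, the egg definitions might express this roughly as 
follows (the key names are hypothetical; the capabilities syntax was still 
being designed at the time):

    # libs/console/full/egg.yml -- implements the capability
    egg.caps:
        - console

    # an egg that merely calls console_printf(), whichever console is used
    egg.req_caps:
        - console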



Re: Task Priorities

2015-11-04 Thread will sanfilippo
I don't have an answer but I would like to add something using the BLE stack as 
an example. The BLE stack wants to create tasks (or a task) and it wants the 
highest priority task. How do we let people know that the BLE stack must have 
the highest priority task? Through documentation? If we let people define task 
priorities there is a risk that they give the BLE task an incorrect priority. 
This would be an issue with either approach, btw.

Anyway, I don't have a strong opinion either way. There are things I like and 
dislike about both approaches, although I slightly lean towards #1 as I am more 
familiar with it and it is a common way of doing things. Neither of which is 
sufficient justification for going with #1, of course :-).

Will

> On Nov 4, 2015, at 11:43 AM, Sterling Hughes  wrote:
> 
> Howdy,
> 
> I'm working on getting blinky a bit more developed as an initial
> project.  First part of that is getting console running on sim, which
> I have working.
> 
> One thing I've been looking at is that the hal_uart starts an OS task when
> initialized.  The priority of that task, and its stack size, are defined by
> the MCU definition (e.g. hw/mcu/native, or hw/stm, etc.)
> 
> I think task priorities are something that should be defined on a
> per-project level.  From what I can see, there are two options for
> doing this:
> 
> 1- Have the individual packages expect a #define from the project, in
> the following format:
> 
> hal_uart.c:
> 
>  os_task_init(OS_PRIO_UART, ..)
> 
> project/blinky/include/project/os_cfg.h:
>   #define OS_PRIO_UART (10)
> 
> This could be enforced using our capabilities concept, where the
> package would req_capability: os_cfg, and the project would provide
> that.
> 
> 2- The init function could take the priority to start the task at.
> So, when hal_uart_init_cbs() is called, it would take two additional
> arguments: the priority and the stack size.  Anything that creates a task
> would be called directly from the project, with these arguments.
> (uart code needs a little refactoring to make this easy, but should be
> fine.)
> 
> I'm leaning slightly towards option #2, as I don't like messing with
> defines.  That said, #1 is much more common, and the way that other
> operating systems do it (I think, so that all priorities are defined
> in a single header file, and you don't have to look through the code
> to find them.)
> 
> What do folks think?
> 
> Sterling
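A minimal sketch of what option #2 might look like, assuming hypothetical 
extra parameters on hal_uart_init_cbs() (as noted above, the uart code would 
need some refactoring first; the callback types are as in the hal_uart header):

    /* Option #2 sketch: task parameters come from the project.  The two
     * trailing arguments are hypothetical additions to the signature. */
    int hal_uart_init_cbs(int uart_num, hal_uart_tx_char tx_func,
                          hal_uart_rx_char rx_func, void *arg,
                          uint8_t task_prio, uint16_t stack_size);

    /* Called directly from project code: */
    rc = hal_uart_init_cbs(0, uart_tx_char, uart_rx_char, NULL, 10, 512);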



Re: subsystem configuration ideas

2016-01-04 Thread will sanfilippo
I am fine with the naming and the interface and all that. Not so sure about Lua 
for config though. Seems like a heavyweight thing for config, so I am glad you 
are considering something simpler :-)

Will

> On Jan 4, 2016, at 10:58 AM, marko kiiskila  wrote:
> 
> Hi,
> 
> so we need to have a way to set/read settings for subsystems.
> These are the ones to be adjusted at runtime.
> 
> What I’m thinking is to build this in a way where the names of
> these are strings, and that you should be able to have hierarchical
> naming, e.g. to have the subsystem be part of the name.
> 
> subsystem1/variable1 = value
> subsystem1/variable2 = another_value
> subsystem2/var1 = yet_another_value
> 
> I’d rather use strings as identifiers as opposed to, say, enumerated
> values, because it would be easier to keep them unique.
> 
> As for setting/reading them, I was going to start with a CLI interface.
> And have interface from newtmgr as well.
> 
> Of course, we will need to persist configuration. So there are a few
> options here: either use Lua scripts, which would be read at
> boot time and could change these settings, and/or
> a simpler script interface for cases when Lua is not present.
> 
> Let me know if you have comments on this,
> M
> 
> 
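A hypothetical sketch of the set/read API over hierarchical string keys (names 
and signatures are illustrative only, not a committed interface):

    /* Hypothetical config API: hierarchical string keys */
    int conf_set_value(const char *name, const char *value);
    const char *conf_get_value(const char *name);

    /* e.g., invoked from the CLI or newtmgr handler: */
    conf_set_value("subsystem1/variable1", "value");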



Re: [VOTE] Release Apache Mynewt 0.9.0-incubating-rc3

2016-06-02 Thread will sanfilippo
[X] +1 Release this package
[ ]  0 I don't feel strongly about it, but don't object
[ ] -1 Do not release this package because...


Will



Re: Running Newt on nrf51 boards with 0.9.0 and 0-dev

2016-06-03 Thread will sanfilippo
Stephane and Wayne:

As an FYI, the commands that you list do the following:

w4 4001e504 2 -> This is the NVMC CONFIG register. Writing a 2 enables erase.
w4 4001e50c 1 -> This is the NVMC ERASEALL register. This will erase the UICR 
and the entire flash.
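For reference, the same erase sequence sketched from C (register addresses 
match the commands above; the READY polling is an assumption based on the 
nRF51 reference manual, and this would have to run from RAM or a debugger, not 
from the flash being erased):

    #include <stdint.h>

    #define NVMC_READY    (*(volatile uint32_t *)0x4001e400)
    #define NVMC_CONFIG   (*(volatile uint32_t *)0x4001e504)
    #define NVMC_ERASEALL (*(volatile uint32_t *)0x4001e50c)

    static void
    nrf51_mass_erase(void)
    {
        NVMC_CONFIG = 2;            /* enable erase (EEN) */
        NVMC_ERASEALL = 1;          /* erase the UICR and the entire flash */
        while (NVMC_READY == 0) {   /* wait for the erase to complete */
        }
        NVMC_CONFIG = 0;            /* back to read-only */
    }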

Note that there is also an MPU on this device that can be used to enable other 
protections. On reset, the MPU registers are “disabled”, meaning that the 
protections provided are not enabled, but if some bootloader runs that writes to 
these registers you might have issues trying to erase portions of the flash. I 
mention this only as an FYI in case you run into difficulties erasing the device.

Note that we will be adding a microbit BSP soon; definitely on the radar! Not 
that that helps you now...

Let me know if you run into other difficulties Wayne.

Will

> On Jun 3, 2016, at 1:15 AM, Stephane D'Alu <sd...@sdalu.com> wrote:
> 
> On 06/03/2016 07:32 AM, Wayne Keenan wrote:
>> Hi Will,
>> 
>> Ok, thanks for letting me know.  Boards 2&3 are 'hidden' behind the mbed
>> CMSIS-DAP interface and I also didn't want to resort to using additional
>> h/w (a non-dev user wouldn't have) in order to erase or program them; prior
>> to programming board 1 I am using:
>> 
>> JLinkExe -device nrf51 -if swd -speed 4000
>> 
>> erase
>> 
>> q
> 
> That will remove protections and erase
> (don't remember where I got it)
> 
> w4 4001e504 2
> w4 4001e50c 1
> sleep 100
> erase
> 
> 
>> 
>> 
>> .
>> All the best
>> Wayne
>> 
>> On 2 June 2016 at 23:12, will sanfilippo <wi...@runtime.io> wrote:
>> 
>>> Hey Wayne:
>>> 
>>> We don't “officially” support the boards you mention as they are not in the
>>> supported BSPs. If you have an “official” nrf51dk that would be the best to
>>> get started on as we do support that currently.
>>> 
>>> Unfortunately, bletiny is a bit of a misnomer; it is not so tiny, and
>>> depending on which version you are trying to build it may be too large to
>>> fit in our current image slot; bleprph should work though. We may have a
>>> work-around for nrf51 bletiny soon but for now I would use bleprph.
>>> 
>>> BTW, are you sure you have erased the devices you are trying to load the
>>> code on? There are protection mechanisms that you must disable in order for
>>> our newt tool to be able to erase/program flash.
>>> 
>>> 
>>> 
>>>> On Jun 2, 2016, at 2:04 PM, Wayne Keenan <wayne.kee...@gmail.com> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I've been trying to get the bletiny and bleprph apps running on a few
>>>> types of nrf51 boards but not having much luck.
>>>> 
>>>> The 3 types are:
>>>> 
>>>> 1. PCA1 -  QFAA G0  (The stubby little USB dongle with the built-in
>>>> Segger J-Link)
>>>> 2. PCA10024 -  QFAA G0 (The mbed enabled board, using a hex file )
>>>> 3. BBC:Microbit  -   QFAA M0
>>>> 
>>>> I've tried with bsp set to 'nrf51dk-16kbram' and 'nrf51dk'
>>>> I am uploading the Newt boot loader app too.
>>>> 
>>>> 
>>>> In order to upload to #2 & #3 (as they appear as USB drives) I've tried
>>>> converting the elf binaries to a combined hex file in two different ways:
>>>> 
>>>> A)
>>>> 
>>>> arm-none-eabi-objcopy -O ihex bin/bletiny/apps/bletiny.elf app.hex
>>>> arm-none-eabi-objcopy -O ihex bin/nrf51_boot/apps/boot/boot.elf boot.hex
>>>> 
>>>> mergehex -m app.hex boot.hex  -o microbit_firmware.hex
>>>> 
>>>> B)
>>>> 
>>>> srec_cat boot.hex -intel app.hex -intel  -o  combined.hex -intel
>>>> 
>>>> 
>>>> I'm pretty sure for #3 that it's possible to flash the entire address
>>>> range of the nrf51, as the micro:bit's default firmware download from the
>>>> web is a > 500k hex file; which, without lifting the hood, implies to me
>>>> that it has the SoftDevice, App and Bootloader.
>>>> 
>>>> I'm not having much luck, I probably need to add some load/start address
>>>> info during objcopy and/or some address altering flags using srec_cat (?)
>>>> 
>>>> Are these boards and the methods currently supported?  I guess they
>>>> should be, but I've butter-fingered something somewhere.
>>>> 
>>>> Or perhaps should I be using the 'official' nrf51dk ?
>>>> 
>>>> 
>>>> All the best
>>>> Wayne
>>> 
>>> 
>> 
> 
> 
> -- 
> Stephane D'Alu



Re: [VOTE] Release Apache Mynewt 0.9.0-incubating-rc2

2016-05-28 Thread will sanfilippo
> [X] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because...

+1 (binding)

Will


Re: GDB scripts

2016-06-01 Thread will sanfilippo
I think having a gdb script or scripts is an excellent idea but I don't think I 
would put them in libs/util. I don't like saying that without offering an 
alternative… but I just did :-)
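As a sketch of the kind of convenience macro being discussed (g_os_time is the 
OS tick counter mentioned elsewhere on this list; the macro name is made up):

    # Sketch of a gdb convenience command for inspecting OS state
    define os_time
      p/d g_os_time
    end
    document os_time
      Print the current OS time tick counter (one tick per millisecond).
    end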


> On Jun 1, 2016, at 10:25 AM, Christopher Collins  wrote:
> 
> On Wed, Jun 01, 2016 at 10:04:05AM -0700, Vipul Rahane wrote:
>> Hello,
>> 
>> While debugging a bunch of things, I felt the need for a few gdb scripts to 
>> print out the data structures that we would use regularly. I was thinking of 
>> creating a common gdb script which would contain different functions. This 
>> could reside in "libs/util" since we do not have a "tools" directory.
>> 
>> The developer would then just source the common script.
>> 
>> All suggestions are welcome.
> 
> Some gdb scripts are sorely needed, and I think that is a great idea.  I
> wrote some small gdb macros ("functions"?) for inspecting mbufs; if you
> create one or more gdb scripts, I'll follow your lead and add mine.
> 
> Chris



Re: Enabling the Nordic HAL gpio external IRQ detection

2016-06-01 Thread will sanfilippo
It is on our radar to port all of the current HAL to the Nordic chip, and this 
would include generating an interrupt on a gpio level change. Not quite sure 
when that will occur but it is near the top of the to-do list.


> On Jun 1, 2016, at 1:24 AM, Wayne Keenan  wrote:
> 
> Hi,
> 
> I'd like to detect button presses via IRQ-based edge detection because 
> polling gpio pins is currently not a practical solution for my 'app'.
> 
> I could just remove the #if defines in the hal .c and just try it but those 
> are obviously there for a 'bigger picture' reason.  
> 
> I couldn't find an open issue on JIRA so wondered if anyone could please let 
> me know a bit more and if this is on anyone's radar?
> 
> All the best
> Wayne



Re: Can't connect while in discovery mode

2016-06-21 Thread will sanfilippo
Hey all:

Sorry that I have not followed the entire thread, so if I repeat something that 
has already been discussed I apologize.

I took a look at the controller HCI command processing and there is something 
that I had forgotten about how it works. There are two “objects” associated 
with sending back an event to the host: a buffer to hold the event data and an 
os event so we can post the event to the task that handles incoming events. I 
told Chris that the controller re-uses the command buffer to send back Command 
Complete/Command Status and that is true. What I had forgotten is that the 
controller frees the os event that is associated with the command and then 
attempts to grab one to send the command complete/status back. This can fail, 
and if it does, no command complete/status is sent.

Regarding resource usage: the controller requires a buffer from g_hci_cmd_pool 
to send events to the host. The transport layer requires an os event to enqueue 
the event from the controller to the host. There are no restrictions (from the 
controller perspective) when it comes to allocating a command buffer from the 
command pool. What this means is that the controller can drain the entire 
command pool sending asynchronous events to the host. If the host does not 
process these events, no commands can be sent from the host to the controller 
and no more events can be sent from the controller to the host. One “fast and 
easy” way for the controller to drain the entire command pool would be through 
advertising reports. The controller only places one advertisement per report, 
and if many reports are received quickly and the host does not process them 
promptly, the command pool will get drained.

Regarding commands from the host to the controller: the LL task is the highest 
priority task and the controller should, very quickly, process the command and 
send back a command complete. I would be very, very surprised if the controller 
is busy doing something else and cannot process commands fast enough, such that 
commands queue up at the controller, although it is possible.

Not sure if this fully answers your question but I hope it provides some 
additional insights.
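A sketch of the allocation path described above, including the missing 
statistic Chris mentions below (the function and stat names are hypothetical; 
os_memblock_get() and g_hci_cmd_pool are from the existing code):

    /* Sketch: allocating an HCI event buffer from the shared command pool */
    static void
    ble_hci_send_event_sketch(void)
    {
        uint8_t *evbuf;

        evbuf = os_memblock_get(&g_hci_cmd_pool);
        if (evbuf == NULL) {
            /* Pool drained (e.g., by unprocessed advertising reports); the
             * event is dropped and no command complete/status goes out. */
            STATS_INC(hci_stats, ev_alloc_fails);   /* hypothetical counter */
            return;
        }
        /* ... fill in the event and hand it to the host transport ... */
    }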


> On Jun 21, 2016, at 12:46 AM, Simon Ratner  wrote:
> 
> I will need to switch over to the nrf52dk to debug, I'll see if I can do
> that in the next couple of days.
> 
> In the meantime, I tried increasing hci buffers from 4 to 8 and it has
> helped somewhat. I am not seeing -1 returns any more, and incoming
> connections are less flaky, but I still see that 14. To clarify a point I
> missed in my original email - and your question has surfaced - that isn't a
> direct return from ble_gap_conn_initiate, but the status code in
> BLE_GAP_EVENT_CONNECT (ctxt->connect.status) following said call. That is
> why I wasn't sure if it is actually a BLE_HS_ETIMEOUT from the host, or
> coming from another part of the stack entirely.
> 
> Just returning to the question of hci buffers briefly, it would help me to
> really understand the resource requirements if you could very briefly
> describe the sorts of things which consume these buffers, and for how long
> they are tied up before being released back into the pool? What does
> "max_hci_bufs=4" mean in practical terms? Same goes for all other
> resources, I guess. As a tangential example, while playing with
> multi-connections, it was not obvious until I read the code that in
> addition to a connection descriptor, I also needed 3 available link
> channels to be able to accept a new connection. I am still not sure why you
> might ever set the number of channels to anything other than 3x connections.
> 
> 
> On Jun 20, 2016 7:13 PM, "chris collins"  wrote:
> 
> (Btw, sorry if these emails "look annoying"... my main computer is out of
> commission, so I have been using the gmail web interface for the last few
> days!)
> 
> There is no connection between the mbuf settings and the max_hci_bufs
> setting.  I don't have a specific max_hci_buf setting in mind, but 4 or 5
> seems reasonable, but I am not so enthusiastic about this change anymore.
> I am pretty sure my theory of what was causing the BLE_HS_ETIMEOUT error is
> incorrect, for the following reasons:
> 
> 1. I was discussing this with Will, and he reminded me that the controller
> always reuses the command HCI buf when it sends an acknowledgement.  In
> other words, the controller should never fail to allocate an HCI buf when
> sending an acknowledgement to the host.
> 
> 2. The host code *doesn't* return BLE_HS_ETIMEOUT when an acknowledgment is
> not received; it returns -1 (another return code bug!).  I simply don't see
> any code path which would yield a return code of 14 here.  I hate to ask,
>> but... are you sure the 14 is coming from ble_gap_conn_initiate()?
> 
>> I am fairly confident the -1 return code from ble_gap_disc_cancel() is
>> indeed caused by an HCI buffer shortage, but I have a feeling there is some
>> sort of bug at the root of these issues.

Re: split image to increase application storage space.

2016-06-21 Thread will sanfilippo
Yep; that clarifies things for me. Thanks!

> On Jun 21, 2016, at 2:09 PM, p...@wrada.com wrote:
> 
> 
> The example was simplified to give folks the general idea.
> 
> Practically, the loader (i.e., the AIIC) code would include whatever someone
> wanted it to include, and it would compile and link as a stand-alone app; in
> this case I used the Bluetooth stack and firmware upgrade as an example.  The
> ADIC would contain the stuff that is NOT in the AIIC.
> 
> So to give a more complex case, suppose the AIIC used 60% of the bluetooth
> stack; that's all that would be included in there.  If the application
> used 100% of the bluetooth stack, the remainder would be in the ADIC.
> 
> This is a consequence of linking the AIIC as a stand-alone app, and then
> linking the ADIC using all the symbols present in the AIIC.
> 
> Does that help or make it more confusing?
> 
> Code would not be upgraded independently, as the ADIC is hard-linked to
> fixed symbol addresses in the AIIC.  While it might be possible to create
> a new ADIC that used the same symbols from the AIIC, I was not intending
> to allow that.
> 
> Paul
> 
> On 6/21/16, 1:54 PM, "will sanfilippo" <wi...@runtime.io> wrote:
> 
>> +1 on idea 1.
>> 
>> Just a point of clarification though: why do you specfically break out
>> (in your example) the bluetooth stack size and the upgrade code size? I
>> think I was originally thinking about things slightly differently here in
>> that the bluetooth stack and upgrade image were combined into one ³image²
>> (they would not be separate) but that does not appear to be the case
>> here. Well, not sure actually. Can you upgrade the upgrade code
>> independently? Sorry if this is obvious and I am not quite getting it :-)
>> 
>>> On Jun 21, 2016, at 12:31 PM, p...@wrada.com wrote:
>>> 
>>> I'm working on the split image feature and I think I just have one more
>>> major design issue to consider, and that is newt build related.
>>> 
>>> First, a summary for folks who are unaware of this effort.  The goal is
>>> to create an application in two pieces to fit into two image banks such
>>> that one piece would contain the bluetooth stack and firmware update
>>> application and the other would contain the primary customer application
>>> (that's the goal, but it would be defined generally to allow any split).
>>> These two are linked together with a special property that the upgrade
>>> app could run without the primary customer app but not vice versa.  To
>>> give these names, I call the independent one the AIIC (Application
>>> Independent Image Component) and the customer app the ADIC (Application
>>> Dependent Image Component).  This would allow the following upgrade
>>> procedure with two flash images. At each step, there would always be a
>>> valid upgrade image loaded into the unit.
>>> 
>>> 1.  Erase the application image from the second image bank (still can
>>> recover since the upgrade image is valid)
>>> 2.  Upload a new upgrade image to the second image bank (primary
>>> upgrade image still intact in case secondary fails)
>>> 3.  Swap the upgrade images in the 1st and 2nd bank (if secondary
>>> doesn't boot, we can always revert)
>>> 4.  Load the new application into the 2nd bank (if this fails, we
>>> still have the upgraded image in the primary)
>>> 5.  Complete
>>> 
>>> This model allows a safe upgrade (because the upgrade image can always
>>> upgrade) while preserving more space for the application, because the
>>> application doesn't have to duplicate the bluetooth stack and associated
>>> upgrade code.
>>> 
>>> Consider this example: two 112k flash banks in which we want to store a
>>> bluetooth application. Assume the following code sizes: application size
>>> 32k, bluetooth stack size 64k, upgrade code size 16k.  With two
>>> independent app images, each would be 112k, filling the available
>>> sectors.  However, if we split the image we would have an upgrade image
>>> of 80k (the 64k stack plus the 16k upgrade code) and an app image of 32k
>>> (since it uses bluetooth and upgrade from the AIIC), leaving tons more
>>> space for sophisticated applications and more space for the upgrade as well.
>>> 
>>> We decided to create this split image as a pair that are linked
>>> together during the build process.  There will be no dynamic bindings
>>> like SWI or function table or anything like that.  The goal is not to
>>> separate the OS/stack from the app, but just to be a bit more efficient
>>> abo

Re: Can't connect while in discovery mode

2016-06-20 Thread will sanfilippo
And btw, just because the controller is supposed to act this way doesn't mean 
there isn't a bug where something is going wrong. I will take a look over the 
code to see if there is a way to orphan command buffers when replying to a 
command (with command complete or command status).


> On Jun 20, 2016, at 7:13 PM, chris collins  wrote:
> 
> (Btw, sorry if these emails "look annoying"... my main computer is out of
> commission, so I have been using the gmail web interface for the last few
> days!)
> 
> There is no connection between the mbuf settings and the max_hci_bufs
> setting.  I don't have a specific max_hci_buf setting in mind, but 4 or 5
> seems reasonable, but I am not so enthusiastic about this change anymore.
> I am pretty sure my theory of what was causing the BLE_HS_ETIMEOUT error is
> incorrect, for the following reasons:
> 
> 1. I was discussing this with Will, and he reminded me that the controller
> always reuses the command HCI buf when it sends an acknowledgement.  In
> other words, the controller should never fail to allocate an HCI buf when
> sending an acknowledgement to the host.
> 
> 2. The host code *doesn't* return BLE_HS_ETIMEOUT when an acknowledgment is
> not received; it returns -1 (another return code bug!).  I simply don't see
> any code path which would yield a return code of 14 here.  I hate to ask,
> but... are you sure the 14 is coming from ble_gap_conn_initiate()?
> 
> I am fairly confident the -1 return code from ble_gap_disc_cancel() is
> indeed caused by an HCI buffer shortage, but I have a feeling there is some
> sort of bug at the root of these issues.  Are you able to debug your
> application in gdb?  I am curious about the state of the nimble stack when
> you receive the -1 or 14 error codes.  In particular:
> 
> # Print state of HCI buffer pool:
> p g_hci_os_event_pool
> 
> # Print GAP master and slave states:
> p ble_gap_master
> p ble_gap_slave
> 
> If you could capture that information that would much appreciated.
> 
> Finally, to answer a lingering question that I seem to have consistently
> ignored: there should not be any issue with timing.  After the call to
> ble_gap_disc_cancel() returns, you can immediately perform another GAP
> procedure.
> 
> Chris
> 
> On Mon, Jun 20, 2016 at 6:08 PM, Simon Ratner  wrote:
> 
>> Ok, so those two sound like they might have the same cause. Perhaps
>> related to that, I also stop receiving incoming connections after a short
>> while, possibly for the same reason, although there is no indication in the
>> logs or anywhere else on the mynewt side - the connecting central just sees
>> a failed connection.
>> 
>> I am able to process all the advertisement reports just fine when I don't
>> attempt to cancel discovery / connect to those discovered peripherals. Is
>> it possible that cancellation is somehow causing or exacerbating this; for
>> example, some reports have already been received but are still being handled
>> by the stack at the time discovery is cancelled, so they are never reported
>> to the app and the corresponding buffers are never freed? Just guessing here.
>> 
>> I'll try increasing hci buffers, too. Do you have a recommended value for
>> max_hci_buf? What about the mbuf size passed to ble_ll - is it at all
>> correlated with host bufs, should they be allocated in certain ratios?
>> 
>> 
>> 
>> On Mon, Jun 20, 2016 at 5:55 PM, chris collins  wrote:
>> 
>>> Hi Simon,
>>> 
>>> Unfortunately I am not able to reproduce that behavior.  However, I think
>>> I can answer one of your questions.  Hopefully that will lead to a full
>>> solution.
>>> 
>>> That -1 return code is generated when the stack runs out of HCI command /
>>> event buffers.  The actual return code is a bug; BLE_HS_ENOMEM should
>>> probably be returned instead.  I am a bit puzzled about the cause of the
>>> buffer shortage.  You are probably receiving a lot of advertisement
>>> reports from the controller, but I wouldn't expect them to be coming in
>>> faster than you can handle them, but I suppose that depends on the
>>> particulars of your application.  You can try increasing the number of HCI
>>> buffers at host initialization time.  This setting is in the host
>>> configuration struct, and it is called max_hci_bufs.
>>> 
>>> Regarding the second problem (ble_gap_conn_initiate() returns
>>> BLE_HS_ETIMEOUT): I have a guess.  The return code indicates that the
>>> controller did not respond to an HCI command in a timely manner.  My guess
>>> is that the controller is unable to allocate an HCI buffer due to the
>>> shortage.  From looking at the code, it appears we don't have any
>>> statistics indicating the number of times an HCI buffer failed to
>>> allocate... this is definitely something that should be added.
>>> 
>>> Chris
>>> 
>>> On Mon, Jun 20, 2016 at 5:07 PM, Simon Ratner  wrote:
>>> 
 Thanks Chris, just tried it out and it seems to do the trick -- half 

HAL cputime and low power RTC

2016-06-23 Thread will sanfilippo
Hello:

I wanted to post a question to the dev list to see if folks had opinions 
regarding the following topic. As others have stated “this will be a long and 
dry email” so be forewarned…

HAL cputime was developed to provide application developers access to a 
generic, high resolution timer. The API provided by the hal allows developers 
to create “timers” that can be added to a timer queue. The API also provides a 
set of routines to convert “normal” time units to hw timer “ticks”. The timer 
queue is used to provide applications with a callback that will occur at a 
given ‘cputime’. The term ‘cputime’ refers to the underlying timebase that is 
kept by the hal. Cputime always counts in tick increments, with the time per 
tick dependent on the underlying HW timer resolution/configuration.

The main impetus behind creating this HAL was for use in networking stacks. BLE 
(bluetooth low energy) is a good example of such a stack. The specification 
requires actions to occur at particular times and many of these actions are 
relative to the transmission or reception time of a packet. The cputime HAL 
provides a consistent timebase for the BLE controller stack to interface to the 
underlying HW and should provide a handy abstraction when porting to various 
BLE transceivers/SoCs.

Using the current nimBLE stack (mynewt’s BLE stack) as an example, the stack 
instantiates cputime using a 1 MHz clock. This means that each cputime tick is 
1 usec. This timebase was chosen as it provides enough (more than enough!) 
resolution for the BLE stack and is in a time unit that is a common factor of 
any time interval used in the specification. For example, advertising events 
are in units of 625 usecs and connection intervals are in units of 1250 usecs.

While using a 1 usec timebase has its advantages, there are disadvantages as 
well. The main drawback is that on some HW this timebase would require use of a 
higher power timer. For example, the nrf52 has a low power timer (they call it 
the RTC) but this timer has a minimum resolution of 30.517 usecs as it is based 
on a 32.768kHz crystal. In its current incarnation, hal cputime cannot support 
this timer as the minimum clock frequency accepted by this hal is 1 MHz.

So, this (finally!) leads to the question I want to ask the community: how does 
the community feel about sacrificing “genericness” for “efficiency”? If it were 
up to me, I would sacrifice genericness for efficiency in a microsecond 
(forgive the bad pun!) in this case. Let me go into a bit more detail here. It 
should be obvious to the reader that there are neat tricks you can play when 
dividing by a power of 2 (it is a simple shift right). In the case of a 32.768 
kHz crystal, each tick is 1/32768 seconds in length (this is where we get the 
~30.517 usec tick interval). What I would like to do is have a compile time 
definition specifying use of a 32.768 kHz crystal for cputime. How this gets 
defined is outside the scope of this email. It may be a target variable, 
something in a pkg.yml file or a newt feature. With this definition the API 
that converts ticks to usecs (and vice versa) does a shift instead of a divide 
or multiply. On the nrf51 this can lead to quite a large savings in time. Using 
the C library 64-bit divide routine that mynewt uses, it takes about 60 usecs 
to perform this divide. When we shift a 64-bit number to perform the divide 
this time gets down to 4 or 5 usecs (slightly more than an order of magnitude 
savings!). Of course, on faster processors or processors that support faster 
divides this might be a moot point, but for those using the nrf51 it is not.

Now you may say “you could have done the same thing in your current HAL cputime 
with a 1 MHz clock”. In this case, the routine to “convert” ticks to usecs (and 
vice versa) would simply return the number passed in. I would like to make this 
change as well personally. Seems quite a big win (and would also save some code 
space too!).
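A sketch of the conversion being proposed, assuming hypothetical compile-time 
definitions (the HAL_CPUTIME_* and g_cputime_freq_hz names are made up). Since 
1e6/32768 == 15625/512 exactly, the divide collapses to a multiply and a 
9-bit shift:

    #include <stdint.h>

    extern uint32_t g_cputime_freq_hz;          /* hypothetical */

    uint64_t
    cputime_ticks_to_usecs(uint64_t ticks)
    {
    #if defined(HAL_CPUTIME_CRYSTAL_32768)      /* hypothetical definition */
        return (ticks * 15625) >> 9;            /* >> 9 == divide by 512 */
    #elif defined(HAL_CPUTIME_1MHZ)             /* hypothetical definition */
        return ticks;                           /* 1 tick == 1 usec */
    #else
        return (ticks * 1000000) / g_cputime_freq_hz;  /* 64-bit divide */
    #endif
    }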

Comments?

Will

 

Re: More Serial Issues

2016-06-27 Thread will sanfilippo
David:

I am not sure if this is the issue, but it could be you are using the nrf52pdk 
bsp as opposed to the nrf52dk bsp. The PCA version on top of the board will 
tell you which one you should be using. The pdk is 10036 and the dk is 10040. 
If you turn the board over you will see that pins 5, 6, 7, 8 are RTS, TXD, CTS 
and RXD (respectively). Of course, you gotta flip ‘em when you connect your 
serial cable to them but I am sure you know that…

Anyway, let me know if that solves your problem (or not).


> On Jun 27, 2016, at 9:34 AM, David G. Simmons  wrote:
> 
> Me again …
> 
> So I gave up on serial to the STM32F3Discovery board (for now) and moved on 
> to the NRF52 board that I just got. 
> 
> Started working through the tutorial 
> http://mynewt.apache.org/os/tutorials/bletiny_project/ 
>  and, of course, 
> found some issues. :-) I can load the image on the board, and it appears to 
> be running (can’t see the BLE device using a BLE stumbler, but that’s another 
> issue I guess) and … still no serial access. 
> 
> From what I know from the board specs, pin PA.08 is RX and pin PA.06 is TX, 
> so I hooked those up to my Adafruit FT232H board, which should give me a 
> USB-serial interface and … once again, nothing. Hmmm … could it be the FT232H 
> itself? Hooked the PA.08 and PA.06 pins to my scope and I would expect to see 
> activity on the PA.06 pin but nope! Flat-lined. 
> 
> So I’d love to be told the error of my ways here … 
> 
> dg
> --
> David G. Simmons



Re: More Serial Issues

2016-06-27 Thread will sanfilippo
By default we do not enable flow control so those lines shouldn't do anything.

As Kevin states, you hook up RTS to CTS, CTS to RTS, TXD to RXD and RXD to TXD 
(if that makes sense). Hopefully the cabling you are using is labeled so it is 
easy to determine which wires are what (txd from computer goes to rxd on board, 
etc, etc).

When you do newt debug, is the code running? Do you see the variable g_os_time 
being incremented? That will tell you if the os is running and if it is I 
suspect the rest is working as well. You should see characters being spit out 
the TXD line (TXD from the nrf52) when bletiny boots up as we dump information 
to the console on startup of bletiny. The serial port should be set for 115200 
baud, 1 stop bit, 8 data bits and no parity (if I remember all that terminology 
correctly).

I presume you have built and downloaded a bootloader to this board. While some 
folks use newt run, I generally don't (don't ask me why; just my own thing). The 
commands to use would be (assuming your target is named ‘tgt’):
newt build tgt
newt create-image tgt 0.0.0
newt load tgt
newt debug tgt

After you do the newt debug tgt you should see the gdb prompt. At this point 
you do this:
monitor reset
c

Let it run for a bit, then stop it in the debugger and do this: p/d g_os_time. 
That should return some non-zero value. If you continue on and then stop again 
and do p/d g_os_time, it should have incremented. Each os time tick is 1 
millisecond.

Another possible issue here is the log level that is being used although I 
suspect you have not messed with that.

Let me know if things are still not working after all this.


> On Jun 27, 2016, at 10:16 AM, David G. Simmons  wrote:
> 
> 
>> On Jun 27, 2016, at 1:08 PM, Kevin Townsend  wrote:
>> 
>> 
>> On 27/06/16 19:05, David G. Simmons wrote:
>>> Will,
>>> 
>>> Thanks! One issue was certainly that I was using the PDK instead of the DK, 
>>> I’ll add that to the documentation as there was no mention of this. And 
>>> while I have been known, in the past, to get TX and RX backwards, that is 
>>> not the case here.
>>> 
>>> That being said, I’m still getting nothing, even on the scope where, no 
>>> matter what was hooked to what, I would still expect to see the Tx pin 
>>> toggling as data is written to it. Still flat-lined. I did try hooking up 
>>> RTS and CTS — even though they’re not mentioned in the tutorial — in hopes 
>>> that I might get some signs of life out of it that way, but still no joy.
>> 
>> CTS on one side goes to RTS on the other side, and similar for RTS which 
>> goes to CTS. Try switching the two lines? You may also need to enable HW 
>> flow control in your terminal emulator depending on what you are using.
>> 
> 
> Yup, got that too. The interesting bit is that both the Tx and Rx pins coming 
> off of the NRF52 are flat-lined on the scope, which indicates that the board 
> is unable to, or unwilling to, properly use those pins for some reason.
> 
> dg
> --
> David G. Simmons



Re: More Serial Issues

2016-06-27 Thread will sanfilippo
Thanks for chiming in Marko, and sorry I didn't respond sooner David. For some 
reason I didn't see any of the later emails; I was not ignoring you. Well, 
technically, I guess I was, but it was not intentional :-)

> On Jun 27, 2016, at 1:15 PM, David G. Simmons <santa...@mac.com> wrote:
> 
> 
>> On Jun 27, 2016, at 3:52 PM, marko kiiskila <ma...@runtime.io> wrote:
>> 
>> 
>>> On Jun 27, 2016, at 12:19 PM, David G. Simmons <santa...@mac.com> wrote:
>>> 
>>>> 
>>>> On Jun 27, 2016, at 1:30 PM, will sanfilippo <wi...@runtime.io> wrote:
>>>> 
>>>> By default we do not enable flow control so those lines shouldnt do 
>>>> anything.
>>>> 
>>>> As Kevin states, you hook up RTS to CTS, CTS to RTS, TXD to RXD and RXD to 
>>>> TXD (if that makes sense). Hopefully the cabling you are using is labeled 
>>>> so it is easy to determine which wires are what (txd from computer goes to 
>>>> rxd on board, etc, etc).
>>> 
>>> Right, all that’s hooked up correctly.
>>> 
>>>> 
>>>> When you do newt debug, is the code running? Do you see the variable 
>>>> g_os_time being incremented? That will tell you if the os is running and 
>>>> if it is I suspect the rest is working as well. You should see characters 
>>>> being spit out the TXD line (TXD from the nrf52) when bletiny boots up as 
>>>> we dump information to the console on startup of bletiny. The serial port 
>>>> should be set for 115200 baud, 1 stop bit, 8 data bits and no parity (if I 
>>>> remember all that terminology correctly).
>>>> 
>>>> I presume you have built and downloaded a bootloader to this board. While 
>>>> some folks use newt run, I generally dont (dont ask me why; just my own 
>>>> thing). The commands to use would be (assuming your target is named ‘tgt’):
>>>> newt build tgt
>>>> newt create-image tgt 0.0.0
>>>> newt load tgt
>>>> newt debug tgt
>>>> 
>>>> After you do the newt debug tgt you should see the gdb prompt. At this 
>>>> point you do this:
>>>> monitor reset
>>>> c
>>>> 
>>>> Let it run for a bit them stop it in the debugger and do this: p/d 
>>>> g_os_time. That should return some non-zero value. If you continue on and 
>>>> then stop again and do p/d g_os_time, it should have incremented. Each os 
>>>> time tick is 1 millisecond.
>>>> 
>>>> Another possible issue here is the log level that is being used although I 
>>>> suspect you have not messed with that.
>>>> 
>>>> Let me know if things are still not working after all this.
>>> 
>>> So just to be completely sure I did the following:
>>> 
>>> 1) newt clean nrf52_boot
>>> 2) newt clean myble
>>> 3) newt build nrf52_boot
>>> 4) newt build myble
>>> 5) newt create-image myble 1.0.1 (I even incremented the version!)
>>> 6) newt load myble
>>> 7) newt debug myble
>>> 
>> 
>> newt load nrf52_boot?
> 
> Well, that right there was the ticket, and it seems to be a missed step in the 
> tutorial. As soon as I added that, it all began to work. Serial over the 
> FT232H is live, etc. So I guess I’ll add that to the Tutorial page as well. 
> and maybe some of the debug information as that was also a helpful diagnostic.
> 
> dg
> 
>> 
>> And in the debugger, right after attaching, try ‘mon reset’ before letting 
>> target
>> continue. NRF52 target is not reset when debugger is attached (we could do 
>> that,
>> just haven’t done the scripts like that for this platform).
>> 
>>> All the output is as expected until:
>>> 
>>> (gdb) p g_os_time
>>> $1 = 1143079104
>>> (gdb) c
>>> Continuing.
>>> …
>>> (gdb) p g_os_time
>>> $2 = 1143079104
>>> (gdb) c
>>> 
>>> So the g_os_time is not incrementing, which indicates, I believe, that the 
>>> actual program is not running, even though it seems to claim it is. Indeed, 
>>> that value never changes, even across builds/loads/debugs. Always the same. 
>>> (Variables won’t, constants aren’t and all that)
>>> 
>> 
>> What do system registers look like? I.e. is it executing bootloader or the 
>> app?
>> 
>>> If the problem is that it never actually starts th

Re: Address Randomization in net/nimble

2016-02-09 Thread will sanfilippo
Not sure if this was answered, but I do think the simple form of random 
addresses is either close to being there or not a lot of work. I would have to 
look into this a bit more to be sure.


> On Feb 6, 2016, at 1:04 PM, Sterling Hughes  wrote:
> 
> Howdy:
> 
> How hard is it to get address randomization in the net/nimble stack?  I 
> realize that full Bluetooth security is probably a month or two away, but 
> would it be possible to just provide the randomization component of it, for 
> privacy and tracking considerations?
> 
> Sterling



Re: OS Task Statistics

2016-01-28 Thread will sanfilippo
There are some other interesting OS-related things we could keep track of. Note 
that these did not all come from me; others had input. I don't want to take all 
the credit; nor all the blame, lol.

1) Maximum amount of time interrupts were disabled.
2) Maximum amount of time a task stayed awake (i.e. time between waking up and 
going back to sleep on its own).
3) Maximum amount of time a task spent waiting to run.
4) For semaphores and mutexes, we could also keep track of the maximum amount 
of time that a semaphore or mutex was held.


> On Jan 27, 2016, at 2:44 PM, Sterling Hughes  wrote:
> 
> Heehaw,
> 
> I'm looking to add statistics to the core RTOS (libs/os), to improve
> instrumentation.
> 
> Here are the commands, and the data I'm bringing back in those commands.  I'd
> love people's input on what else they think should be included here.
> 
> Taskinfo:
> 
> - Array of tasks, each containing:
>  - Task Name
>  - Priority
>  - Number of context switches
>  - Task Run Time
>  - State (RUN, SLEEP, SEM_WAIT, MUTEX_WAIT)
>  - Stack Usage
>  - Stack Size
>  - Last Sanity Checkin
>  - Next Sanity Checkin
> 
> Memory Pool Info:
> 
> - Array of memory pools, each containing:
>  - Memory Pool Name
>  - Pool Element Size
>  - Number of blocks in the pool
>  - Number of free blocks in the pool
>  - Address of the last free and the last allocate from this pool
>(Should this be a variable size array?)
> 
> Also, right now memory pools are not centrally linked.  This change
> would require there to be a list of all memory pools initialized by
> the system, adding 4 bytes to the mempool structure, and 4 bytes of
> .bss.  Any objections?
> 
> Sterling
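A sketch of what the per-task record might look like, with fields taken from 
the list above (all names and sizes hypothetical):

    #include <stdint.h>

    /* Hypothetical per-task record for the taskinfo command */
    struct task_info {
        char name[8];                   /* task name */
        uint8_t prio;                   /* priority */
        uint8_t state;                  /* RUN, SLEEP, SEM_WAIT, MUTEX_WAIT */
        uint16_t stack_usage;           /* stack high-water mark */
        uint16_t stack_size;            /* total stack size */
        uint32_t ctx_switches;          /* number of context switches */
        uint32_t run_time;              /* task run time */
        uint32_t last_sanity_checkin;   /* os time of last checkin */
        uint32_t next_sanity_checkin;   /* os time of next expected checkin */
    };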



OS time tick initialization

2016-02-02 Thread will sanfilippo
Hello:

Porting the OS to the nrf51 has exposed an issue for certain Cortex-M MCUs, 
namely the lack of SysTick. Furthermore, it may be advantageous from a power 
perspective to use a different timer for the OS time tick. Thus, the problem is 
this: how does the developer pick the timer to use for the os time tick?

Personally, I think this is a project/os configuration option. Placing this 
decision in hw/mcu would force every project that used the MCU to use a 
particular timer. Putting it in the bsp is slightly better, but then every 
project using that BSP would use the timer chosen by the BSP. One possible 
benefit to putting this decision in the bsp (or mcu) is that it shields the 
developer from HW/MCU specifics. Not sure that this is a good thing though!

What I am thinking of is something more like this:
* The HAL provides a set of timers to use: rtc, generic timer, cputime, 
systick. Note that some of these currently don't exist. :-)
* The developer has some means of picking one of these HAL timers to use.

If folks agree with the basic idea, any thoughts on how to do this? Should we 
modify the os_init() or os_start() API? Should there be some sort of os 
configuration file per project? In the project egg? In the target?

Will




Re: Moving towards a beta release

2016-02-02 Thread will sanfilippo
Some thoughts (belatedly):

* I don't think we need coredump in B1.
* Do we need a different BLE MCU/transceiver for first release? Not sure if we 
should spend time on the stack as opposed to diverting it to porting to a 
different MCU/transceiver.
* Do we need separate host/controller for first release?



> On Feb 1, 2016, at 8:23 AM, Sterling Hughes  wrote:
> 
> Hiya,
> 
> I think we're getting close to ready for our first beta release.  If you can 
> bear with the long email, please read and give feedback.
> 
> (MENTORS: this would be a good one to review)
> 
> In my mind, the release schedule looks like:
> 
> - Feb 12th, B1
> - March 12th, B2
> - April 12th, Release X
> 
> Where X is something like 0.8 or 0.9 -- I don't think we're quite at a 1.0 
> yet, but we're definitely well beyond a 0.1 release.
> 
> In my mind, remaining for a B1 release is:
> 
> - ASF copyright headers: we've been lax in updating these, most of which say 
> (c) Runtime, licensed under Apache 2.  These need to be changed to the 
> standard Apache project headers.
> 
> - Log & Statistics cleanup: I've done most of the infrastructure, but we need 
> to count statistics and use our logging infrastructure throughout the code.
> 
> - Project cleanup: Blinky & Slinky are our two main projects (Blinky is the 
> basic setup, and Slinky is a more full-featured example.)
> 
> - Coredumps?   Do we need these in B1?
> 
> - Release packaging: how is Mynewt distributed?  Do we branch the larva 
> repository, and distribute newt binaries that pull from a specific branch of 
> ASF infrastructure?  Do we package up all of larva to begin with?
> 
> - JIRA: We need to get our bugs & features into JIRA, along with links to 
> JIRA from the Mynewt website.
> 
> Between B1 & B2, I suspect the majority of our efforts are going to be 
> focused on:
> 
> - Testing & Docs: The unit test framework has been unevenly used throughout 
> the various packages.  We'll need to spend some significant time updating and 
> adding unit & regression tests.
> 
> I'd say the docs as-is need a fair hand on them from all the developers as 
> well.   Aditi has done a great job getting the infrastructure up, but one 
> person cannot document the entire thing!
> 
> - Board Ports: In order to increase adoption we're going to need a variety of 
> board ports, along two vectors:
>   a- Maker/Common Boards: the OS should run on the boards a developer or 
> hobbyist is likely to have on them.   This means Arduino, etc.
>   b- Diversity of BLE chipsets: Right now Nordic NRF52 and shortly NRF51 are 
> supported.  We're going to need to expand this beyond Nordic in the first 
> release.
> 
> - HAL interfaces: we're missing some crucial ones ATM (e.g. SPI), and this 
> will need to get a little better.
> 
> - Continued BLE development:
>  - While we've got a compliant implementation now, we're going to need to 
> further develop things like BLE security, etc.
> 
> Finally, between B2 and Release, I think we're going to be looking at:
> 
> - Bugfixing & Documentation
> 
> - Some assorted board ports, if low risk.
> 
> - More regression testing.
> 
> Thanks,
> 
> Sterling



Re: OS time tick initialization

2016-02-02 Thread will sanfilippo
Well, that would have been a better way to put it if you ask me. :-)

A good example of picking a different timer would be time accuracy vs power 
related savings. For example, a device that is constantly powered may want to 
use a timer off a high accuracy crystal as opposed to one that is lower 
accuracy but conserves power. Another reason might be timer capabilities; one 
timer may be able to do more/less than another timer and application 
requirements could force use of a different timer. I agree, “generally” we 
could pick the correct timer but not in every case (imo).

I see what you are proposing. Well, at least I think so. The only thing I am 
not sure of in your proposal is where in the MCU specific directories this 
would go. What would the API call be and what file would it reside in?

Will


> On Feb 2, 2016, at 12:03 PM, Sterling Hughes 
> <sterling.hughes.pub...@gmail.com> wrote:
> 
> Will - 
> 
> I didn't mean anything too harsh by calling it insanity- what I meant was "it 
> seems really hard to have every build project define the cpu system timer."  
> 
> What are the specific cases where you'd define which system timer to use on a 
> per-project basis?  I could potentially see scaling CPU usage during system 
> operation- but that seems like a different API to me.  
> 
> I was proposing that we make it default per MCU, and optionally per-BSP by 
> using the newt capability API to create a define in the MCU specific 
> directories (-DUSE_BSP_TICKER) which would ifdef away the OS ticker and allow 
> the BSP to override it.
> 
> Sterling. 
> 
> 
>> On Feb 2, 2016, at 11:37 AM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> Insanity? That is a bit harsh, don't you think? I don't think using words like 
>> “insanity, ridiculous, stupid, etc.” are conducive to having folks contribute 
>> to the project. Just my opinion…
>> 
>> I am not quite sure what you are proposing, meaning I would not know what to 
>> implement based on your reply. It must be my insanity.
>> 
>> 
>>> On Feb 2, 2016, at 10:21 AM, Sterling Hughes <sterl...@apache.org> wrote:
>>> 
>>> 
>>> 
>>>> On 2/2/16 10:15 AM, will sanfilippo wrote:
>>>> Hello:
>>>> 
>>>> Porting the OS to the nrf51 has exposed an issue for certain cortex-M 
>>>> MCU’s, namely the lack of SysTick. Furthermore, it may be advantageous 
>>>> from a power perspective to use a different timer for the OS time tick. 
>>>> Thus, the problem is this: how does the developer pick the timer to use 
>>>> for the os time tick?
>>>> 
>>>> Personally, I think this is a project/os configuration option. Placing 
>>>> this decision in hw/mcu would force every project that used the MCU to use 
>>>> a particular timer. Putting it in the bsp is slightly better, but then 
>>>> every project using that BSP would use the timer chosen by the BSP. One 
>>>> possible benefit to putting this decision in the bsp (or mcu) is that it 
>>>> shields the developer from HW/MCU specifics. Not sure that this is a good 
>>>> thing though!
>>>> 
>>>> What I am thinking of is something more like this:
>>>> * The HAL provides a set of timers to use: rtc, generic timer, cputime, 
>>>> systick. Note that some of these currently don't exist. :-)
>>>> * The developer has some means of picking one of these HAL timers to use.
>>>> 
>>>> If folks agree with the basic idea, any thoughts on how to do this? Should 
>>>> we modify the os_init() or os_start() API? Should there be some sort of os 
>>>> configuration file per project? In the project egg? In the target?
>>>> 
>>> 
>>> I definitely don't think this should be per-project: that's insanity.
>>> 
>>> Generally it's clear on an MCU which timer is best suited for the OS to run 
>>> off of.  If we want to override this depending on a BSP, I think we could 
>>> have the BSP export a capability BSP_SYSTICK, and if that capability is 
>>> specified, the MCU definition will not include the OS definition.
>>> 
>>> Sterling
>> 



Re: OS time tick initialization

2016-02-03 Thread will sanfilippo
That is a good question; wish I had a good answer. For now we could kick the 
can down the road and simply pass the priority in the API. Or just have the BSP 
program the interrupt priority to 1 less than the lowest priority. We could 
also return the interrupt vector used by the BSP to the OS and have the OS set 
the priority.
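A sketch of the "pass the priority in the API" option (hypothetical names; the 
existing os_bsp_systick_init() takes no priority argument):

    /* Hypothetical: the OS hands the BSP the tick rate and the interrupt
     * priority to use for whichever timer the BSP picks. */
    void os_bsp_systick_init(uint32_t os_ticks_per_sec, uint32_t irq_prio);

    /* e.g., from os_start(), 1 less than the lowest priority as suggested
     * above (LOWEST_IRQ_PRIO is a made-up name): */
    os_bsp_systick_init(OS_TICKS_PER_SEC, LOWEST_IRQ_PRIO - 1);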


> On Feb 3, 2016, at 10:28 AM, p...@wrada.com wrote:
> 
> I just pulled your changes and went to implement os_bsp_systick_init().
> 
> It was really just moving systick_init from os_arch_arm.c to os_bsp.c.
> However, the current systick_init code needs an interrupt vector priority
> which depends on the priority of the SVC.  How should we resolve the
> interrupt priorities between the BSP and the OS?
> 
> On 2/2/16, 2:56 PM, "will sanfilippo" <wi...@runtime.io> wrote:
> 
>> IMO, I don't think any “OS” related things belong in hw/mcu. Something
>> like “os_setup_timer()” should not be placed in any hw/mcu directory. The
>> hw/mcu directories provide a standard interface to the outside world
>> through a HAL. The HAL should have timer interfaces (RTC, generic timer,
>> etc). Note that the CMSIS-HAL already provides a SysTick config API. One
>> of these HAL interfaces should be used for the os time ticker.
>> 
>> BTW, older versions of the code did have the os time tick API in the bsp.
>> It is still there in os_arch.h; it is called os_bsp_systick_init().
>> 
>> So here is a solution that might be fine for all: we use the bsp
>> interface and that calls out a specific HAL interface provided by the
>> MCU. This will be SysTick for MCU’s which support it; otherwise the bsp
>> will use a different HAL timer (provided by the MCU).
>> 
>> Sound good?
>> 
>> Will
>> 
>> 
>> 
>>> On Feb 2, 2016, at 2:43 PM, p...@wrada.com wrote:
>>> 
>>> I'll throw in my support as well.
>>> 
>>> Certainly on some processors different timers have different current draw
>>> and require different sleep states.  For wearables, this is super critical
>>> for battery life.  I expect that folks will want to use alternate timers
>>> for the system tick to maximize battery life.
>>> 
>>> Maybe I'm saying the same thing as Will, but I think the right approach is
>>> to have the system tick defined in the MCU but be able to be turned off
>>> like Sterling said.  If I were implementing a board with the MCU and I
>>> turned off the standard system tick, I'd want to implement my systick code
>>> in my BSP. 
>>> 
>>> Paul
>>> 
>>> On 2/2/16, 2:24 PM, "marko kiiskila" <ma...@runtime.io> wrote:
>>> 
>>>> I agree with Sterling.
>>>> 
>>>> MCU specific code seems like the best place to keep this. For the cases
>>>> where BSP wants to use a non-default
>>>> timer, it can influence MCU compilation.
>>>> 
>>>> 
>>>>> On Feb 2, 2016, at 2:05 PM, Sterling Hughes <sterl...@apache.org>
>>>>> wrote:
>>>>> 
>>>>> I think even in those use cases, that would probably apply per-BSP.  I
>>>>> don't think its very common that the OS timer would be used
>>>>> differently
>>>>> within the same BSP.  Have you/has anyone seen cases where the only
>>>>> difference was the time source, but otherwise the BSP was identical?
>>>>> 
>>>>> In order to setup the compile options: you would have a function to
>>>>> setup the OS time tick, that would get defined by the MCU.  You would
>>>>> surround that function with:
>>>>> 
>>>>> #ifndef USE_BSP_TICKER
>>>>> int
>>>>> os_setup_timer()
>>>>> {}
>>>>> #endif
>>>>> 
>>>>> Then, if the BSP wanted to override the os_setup_timer(), it would
>>>>> export the following capability:
>>>>> 
>>>>> BSP_OS_TICKER_DEF
>>>>> 
>>>>> Each MCU must, in order to respect that setting have the following in
>>>>> it's egg.yml file:
>>>>> 
>>>>> egg.cflags.BSP_OS_TICKER_DEF: -DUSE_BSP_TICKER
>>>>> 
>>>>> Sterling
>>>>> 
>>>>> On 2/2/16 12:41 PM, will sanfilippo wrote:
>>>>>> Well, that would have been a better way to put it if you ask me. :-)
>>>>>> 
>>>>>> A good example of picking a dif

Re: our HAL and the new mbed-hal

2016-02-28 Thread will sanfilippo
Given the current state of the Mynewt hal, I think the question we need to 
answer is whether or not the mbed hal provides the functionality we think 
developers will need. Looks like what mbed is doing and mynewt is doing are 
very similar. Why not just co-opt the mbed HAL entirely? I cant think of a good 
reason not to, but I have not looked at the mbed hal in enough detail, 
especially in regard to bsp.

Anyway, I don't think it will be hard to map mynewt to mbed. I just don't like 
it if it adds another layer of indirection (i.e., it hurts efficiency).

Will
 
> On Feb 27, 2016, at 1:55 PM, Sterling Hughes  wrote:
> 
> Hi,
> 
> I posted:
> 
> https://issues.apache.org/jira/browse/MYNEWT-174
> 
> If folks have a little spare time to look at this over the weekend, I'd be 
> super appreciative for any thoughts people have.
> 
> I've spelled out what I'm thinking in the comments section.  But the summary 
> here is: it would be good to have some re-use of the mbed-hal work, and not 
> force chip vendors who are doing new microcontrollers to implement both 
> mbed's hal and ours.
> 
> The ideal case would be that we can map our HAL to Mbed's HAL, and then find 
> someway that we can use our package system to include all the mbed-hal 
> libraries.  That way, for ARM Cortex-M* platforms, we can share effort on the 
> HAL.
> 
> And the benefit to keeping our HAL, and developing against it -- we can be 
> microcontroller architecture (i.e. non-ARM) agnostic.
> 
> Anyhow, please do look through the mbed-hal, and an implementation:
> 
> https://github.com/ARMmbed/mbed-hal (top-level HAL apis)
> https://github.com/ARMmbed/mbed-hal-silabs (general silabs hal)
> https://github.com/ARMmbed/mbed-hal-efm32gg (EFM32 silabs chipset impl)
> 
> (there are more links in the ticket)
> 
> Thoughts?  Issues?
> 
> Sterling



Re: our HAL and the new mbed-hal

2016-02-29 Thread will sanfilippo
Sterling (and all):

>> I think we will need to: mbed-hal is being developed only for ARM 
>> processors.  If we want to use the HAL on other MCUs (MIPS, 8051, Intel, 
>> etc.) -- it would be good to have this mapped.


I am probably getting confused by how you worded that statement, but the hal 
they present is generic and would work for any MCU. Yes, only MCU vendors with 
Cortex-M processors are mapping their HALs to mbed, but if we decided to adopt 
the mbed HAL it would be easy to map other MCUs to it; pretty much exactly the 
same amount of work it would take to map mynewt to their HALs.

I certainly understand why we might want to map the mynewt hal to mbed: do it 
once and then you get all the mbed hal ports for free. But like I said, we 
should take a serious look at whether or not we just want to use the mbed hal 
in that case. Yes, I am repeating myself :-)

Will


> On Feb 28, 2016, at 9:40 PM, marko kiiskila <ma...@runtime.io> wrote:
> 
> 
>> On Feb 28, 2016, at 7:06 PM, Sterling Hughes <sterl...@apache.org> wrote:
>> On 2/28/16 10:02 PM, will sanfilippo wrote:
>>> Given the current state of the Mynewt hal, I think the question we need to 
>>> answer is whether or not the mbed hal provides the functionality we think 
>>> developers will need. Looks like what mbed is doing and mynewt is doing are 
>>> very similar. Why not just co-opt the mbed HAL entirely? I cant think of a 
>>> good reason not to, but I have not looked at the mbed hal in enough detail, 
>>> especially in regard to bsp.
>>> 
>>> Anyway, i dont think it will be hard to map mynewt to mbed. I just dont 
>>> like it if it adds another layer of indirection (i.e. efficiency).
>>> 
>> 
>> I think we will need to: mbed-hal is being developed only for ARM 
>> processors.  If we want to use the HAL on other MCUs (MIPS, 8051, Intel, 
>> etc.) -- it would be good to have this mapped.
>> 
>> It seems like a lot less work to map the API into ours (once), then have to 
>> maintain mbed-hal separately from ARM for non-ARM MCUs.
> 
> mbed-hal has almost the same scope as ours. And we have not developed ours
> very far.
> I do not like unneeded layers of indirection either, but mbed folks might 
> raise objections
> if we took their HAL and implemented it for other architectures.
> 
> However, Will’s point is valid: drivers do not depend on this API only, there 
> is also per-BSP
> definition of per-peripheral config. We would have to adapt that stuff as 
> well.
> 
> I have not scrounged driver sources; there might be other dependencies 
> (probably limited
> though) to other things mbed. I.e. how to sleep, get system time etc.
> 
> I do not know the right answer here, just wondering what we’d encounter. I 
> guess one way
> to explore the space would be to try it out.
> 
> Not sure if helpful,
> —
> M



Re: Tutorial topics for Apache Mynewt

2016-02-29 Thread will sanfilippo
I think this is a good, representative list given the current state of Mynewt.


> On Feb 26, 2016, at 5:23 PM, aditi hilbert  wrote:
> 
> Hi everyone,
> 
> With the first release of Apache Mynewt poised to be unleashed to the world, 
> I’d like to brainstorm some tutorial topics to get people trying out the OS 
> and seeing how easy it is to use. Let’s try to come up with 10 tutorial 
> topics. 
> 
> Here are a few I thought of. I’d like us to come up with at least 10 
> additional tutorials. And yes, it would mean doing them and documenting them. 
> Feel free to pick the list apart and suggest your own ideas. And we can come 
> up with a final list and vote.
> 
> 1. How to create a custom LED blink pattern on the STM32F Discovery board 
> from STMicro 
> 2. Turn on the LED x mins after specified wall clock time (like security 
> lights that automatically turn on after 6 pm)
> 3. How to define a new event or statistic (e.g. available memory is less than 
> a specified threshold) and log an alert (or read it with newtmgr)
> 4. How to write a test utility for a pkg
> 5. How to plug in a different file system instead of nffs (say, yaffs ?)
> 6. Connect a digital sensor to a board (Arduino?) via GPIO or UART, detect 
> and log level changes.
> 7. Quiz buzzer - scan the push button input and display the corresponding 
> number on a display
> 8. Build a BLE beacon that broadcasts some internal information 
> (manufacturing specific info or firmware info)
> 9. Query your Mynewt BLE device (board) remotely via console terminal 
> 10. Slinky - this is already there but could do with some embellishment esp. 
> in the documentation.
> 
> thanks,
> aditi



Re: Question on nlip protocol and log messages

2016-02-26 Thread will sanfilippo
Gordon:

I probably should not answer since I have not looked at the logging in detail, 
but the console itself does not protect against a task getting preempted. Thus, 
at times, the output to the console gets all jumbled up.
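
If someone wanted to guard against that, the obvious (if heavy-handed) fix is 
to serialize console writers with a mutex. A minimal sketch using the os_mutex 
API; note the locked wrapper is hypothetical, and it assumes console has (or 
grows) a va_list-style entry point:

#include <stdarg.h>
#include "os/os.h"
#include "os/os_mutex.h"

static struct os_mutex g_console_mtx;

/* call once at init time */
void
console_lock_init(void)
{
    os_mutex_init(&g_console_mtx);
}

/* hypothetical wrapper; cannot be used from ISR context */
void
console_printf_locked(const char *fmt, ...)
{
    va_list args;

    os_mutex_pend(&g_console_mtx, OS_TIMEOUT_NEVER);
    va_start(args, fmt);
    console_vprintf(fmt, args);    /* assumed va_list entry point */
    va_end(args);
    os_mutex_release(&g_console_mtx);
}

Note that this only keeps two tasks' output from interleaving; it would not fix 
the nlip framing problem by itself, since a log line could still land between a 
request and its response unless the nlip code held the lock across the whole 
exchange.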

Hope I am answering your question :-)


> On Feb 26, 2016, at 3:58 PM, Gordon Chaffee  wrote:
> 
> I ran into an issue with log messages and the nlip protocol when
> communicating between newtmgr and a sim target. If a log message occurs
> after the request is made but before the response is written, it causes the
> stream to get out of sync, and newtmgr is unable to parse the response
> since it expects a json response.
> 
> In my case, I ran into it while adding a 'logs append ' command to
> test the logging facility, so the log message is written to the console and
> interrupts the nlip stream 100% of the time. Outside of the specific case
> I've run into, I think a context switch to another task that writes a log
> message could interrupt any message exchange.
> 
> Is my thinking correct about the potential problems of log messages being
> written to console?
> 
> Thanks,
> Gordon



Fwd: hal organization and multiple smaller packages

2016-02-22 Thread will sanfilippo
Sorry all; thought this was addressed to dev

> Begin forwarded message:
> 
> From: will sanfilippo <wi...@runtime.io>
> Subject: Re: hal organization and multiple smaller packages
> Date: February 22, 2016 at 3:42:15 PM PST
> To: sterl...@apache.org
> 
> See comments
> 
>> On Feb 22, 2016, at 2:58 PM, Sterling Hughes <sterl...@apache.org> wrote:
>> 
>> 
>> 
>> On 2/22/16 1:24 PM, will sanfilippo wrote:
>>> My 1/2 cent on this topic (and I certainly dont think you killed the 
>>> discussion; it is a difficult topic):
>>> 
>>> * I think the HAL is meant to be a fairly general, simple, abstraction. 
>>> Hopefully over time we will be able to incorporate more advanced HAL 
>>> features, but most HALs I have seen implement the basics and I bet that 
>>> works for most folks.
>>> * I think the HAL should live in hw/mcu. Well, api in hw/hal and 
>>> implementation in hw/mcu.
>>> * As sterling says, drivers can be built that use the HAL. Take the 
>>> external ADC example. There would be a driver for that ADC chip that would 
>>> use a SPI HAL if it had SPI. For internal ADCs, the HAL provided in hw/hal 
>>> should be enough as I suspect it will (eventually), implement what most 
>>> folks want.
>>> * As far as being able to see what features of a HAL are implemented, I 
>>> dont see why this is such a problem but it is probably because I am not 
>>> thinking of “beginner” users. Doesnt seem terribly difficult to document, 
>>> on a per mcu basis, which features of the HAL are supported by that 
>>> particular MCU. And if the developer calls an API with some parameters that 
>>> are not implemented on this MCU they get an error. Part of the problem I 
>>> have with this is is my own personal bias: I would never blindly call HAL 
>>> functions without first reading the chip documentation. I dont see why 
>>> anyone would do such a thing :-)
>>> * I am not a fan of runtime HAL introspection APIs. To me, that is just 
>>> extra code that serves very little useful purpose.
>> 
>> I agree with no runtime, but think there should be capabilities on a more 
>> granular basis.
> 
> Sorry, I did not mean to imply that there should not be more granular 
> capabilities. I think there should be. Having ways to inspect packages for 
> these capabilities so developers can easily see what our HAL supports is a 
> good idea. I just don't think this needs to be made overly complicated is all.
> 
>> 
>> i.e. a driver can require a hal-adc, or hal-gpio capability, and an MCU can 
>> export these, rather than the just "hal."
>> 
>>> * I think the HALs should allow for the user to choose which “peripheral” 
>>> to bind to and that is done through the BSP or the HAL API itself. For 
>>> example, the user should be able to pick ADC #1 or ADC #3.
>>> * I do agree that sometimes it is difficult to know that you need to call 
>>> functions like bspProvideADCconfig() and the like. Not sure how this gets 
>>> solved other than documentation and looking at examples that we provide.
>>> 
>> 
>> IMO, the HAL should provide APIs to do this, and the BSP should call those 
>> APIs.
> Not sure what you mean exactly. Is it the same as our current structure? The 
> project code calls the hal api, which in turn calls the bsp api?
> 
>> Did you see other email: what do you think about flash?  Should the HAL APIs 
>> just apply to internal flash / should we get rid of HAL Flash altogether, or 
>> should HAL flash encompass both internal & external flashes...?
>> 
>> Sterling
> I did :-) I have not given it enough thought so I don't think I have an 
> intelligent answer. However, the idea of a HAL flash appeals to me. Question: 
> if we said the hal flash should be internal only, how do we deal with 
> external flashes? Library code calls driver code?



Re: Bye Bye Eggs :-(

2016-02-24 Thread will sanfilippo
application/project

> On Feb 24, 2016, at 1:56 PM, Sterling Hughes  wrote:
> 
> 
> 
> On 2/24/16 1:06 PM, aditi hilbert wrote:
>> Sorry to pipe up late and I know how involved the changes are but I need to 
>> understand the reasoning better to be able to document properly.
>> 
>> For the most part I get the changes and agree with them. The only one that I 
>> am struggling with is “app” instead of “nest”. The term “application" 
>> doesn’t quite convey the sense of a collection (repo) even though that’s 
>> what it is (our larva, tadpole etc.). And the packages in such a nest 
>> (legacy term) could be composed to enable different applications in the real 
>> world from a user perspective. I am wondering whether “workspace” or “app 
>> container” or simply “repo" conveys the meaning better.
>> 
> 
> I really didn't like "repo" or "repository" -- it made sense to me, but 
> people got confused by git repository vs our repository.
> 
> "workspace" is good too, and I'm happy to change it if people prefer that.  I 
> did application because that was the more common term (ruby on rails, node, 
> etc.)   That said, this is kinda a different space.
> 
> For context, an application is where you keep all of your packages for a 
> class of device.  Projects are where the main() function resides, and specify 
> the set of linked packages that compose software that gets built.  So think 
> of project as the top level src/ directory, and an application as a 
> combination of src/ and any linked libraries.
> 
> I'd really be interested in other people's thoughts here, what makes more 
> sense to you:
> 
>  [  ]  workspace/application
>  [  ]  application/project
> 
> Sterling
> 



Re: PWM API

2016-03-30 Thread will sanfilippo
I am not a huge fan of the API using clock ticks as opposed to time. I wonder 
if the API should just take a frequency and a duty cycle. If the underlying HW 
can't support the request, it can return an error. I have not thought this 
through completely so I am sure there is some reason this is not good :-)

I guess one possible issue is the case where the HW can't support the exact 
frequency. For example, I ask for 1 kHz but can only get 1.1 or 0.9 kHz.

Anyway, I don't have an alternate proposal so maybe I shouldn't comment :)
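
That said, just to make the frequency/duty idea concrete, here is a rough 
sketch built on top of the accessors Paul proposes below (purely illustrative; 
the helper name is made up and it assumes the counter resolution is less than 
32 bits):

/* duty in 1/10000ths: 0 = always low, 10000 = always high */
int
hal_pwm_set_freq_duty(uint32_t freq_hz, uint16_t duty)
{
    uint32_t clk;
    uint32_t period;

    clk = hal_pwm_get_clock_freq_hz();
    period = clk / freq_hz;   /* truncation here is the 1 kHz vs 1.1/0.9 issue */
    if (period == 0 || period >= (1UL << hal_pwm_get_resolution_bits())) {
        return -1;            /* frequency not achievable */
    }
    hal_pwm_set_period(period);
    hal_pwm_set_on_duration((uint32_t)(((uint64_t)period * duty) / 10000));
    return 0;
}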

Will

> On Mar 30, 2016, at 9:56 AM, p...@wrada.com wrote:
> 
> 
> 
> A quick discussion thread on the PWM API.  I currently proposed a PWM API 
> with five simple methods in my PWM git pull request.
> 
> hal_pwm_set_period(uin32_t usec);
> hal_pwm_set_on_duration(uint32_t usec);
> hal_pwm_on();
> hal_pwm_off();
> hal_pwm_create();
> 
> The on/off/create APIs are fine, and I don't intend to change them.
> 
> But the setting APIs assume a lot about the underlying PWM controller setup.  
> Mainly that it has lots of resolution (clock rate) and lots of precision 
> (bits of PWM register). The goal of this simple API was to abstract the HW 
> enough to make it simple to use, but I think I've got overboard and made it 
> unusable.  So I want to propose a new HAL API like this.
> 
> /* returns the count frequency for the underlying PWM. This is set by the BSP 
> and HW limitations */
> uint32_t hal_pwm_get_clock_freq_hz(void)
> 
> /* returns the register size of the counter regiser for the underlying PWM.  
> This is set the by BSP and HW limitations */
> uint32_t hal_pwm_get_resolution_bits(void);
> 
> /* sets the period of the PWM waveform in clock (clock frequency returned 
> above). Can't exceed the resolution above (2^N) or error */
> Int hal_pwm_set_period(uin32_t clocks);
> 
> /* sets the on duration of the PWM waveform in clocks (clock frequency 
> returned above).  Can't exceed the resolution above or error */
> Int hal_pwm_set_on_duration(uint32_t clock);
> 
> /* sets the duty cycle of the PWM. 0=always low, 255 = always high.
> * Sets the period to the smallest possible to achieve this exact duty cycle. 
> Overwrites any
> * changes made by  hal_pwm_set_period and hal_pwm_set_on_duration. This is 
> designed to be the simple API that folks
> Would use if they just want a duty cycle for controlling LED brightness etc */
> Int hal_pwm_set_duty_cycle(uint8_t frac);
> 
> Comments? This API is a bit more complicated but lets the application know a 
> lot more about the underlying functionality of the PWM provided by the BSP.
> 
> Paul
> 
> 



Re: hal_gpio_toggle() should return changed pin state

2016-04-06 Thread will sanfilippo
Seems fine to me.


> On Apr 6, 2016, at 1:45 PM, Vipul Rahane  wrote:
> 
> Hello,
> 
> Current definition: void hal_gpio_toggle() doesn’t return anything. 
> Proposed definition: int hal_gpio_toggle(). I want to do this so that it 
> returns the pins changed state. I would like to make this change so that we 
> know what the gpio is actually set to.
> 
> Both ways we do hal_gpio_read(). Thoughts ?
> 
> Regards,
> Vipul Rahane



Re: Timestamp for logs

2016-04-08 Thread will sanfilippo
It does seem like all those bits are unnecessary. Either 48 or 64 bits seems 
plenty. One note: if we want to use the same logging infrastructure for the 
controller in the ble stack, we will need sub-millisecond precision. 1 
microsecond units would be preferable; 16 bits for time since the last second 
is not great given BLE timing requirements but could work… that works out to 
1 s / 2^16, or about 15.3 usec per tick, right?


> On Apr 8, 2016, at 7:33 AM, Sterling Hughes 
>  wrote:
> 
> 
> 
>> On Apr 8, 2016, at 6:11 AM, Christopher Collins  wrote:
>> 
>>> On Thu, Apr 07, 2016 at 10:33:26PM -0700, Vipul Rahane wrote:
>>> Hello,
>>> 
>>> I agree with your statement. We do not know on what kind of devices
>>> Mynewt would be ported to. Sleepy devices which are meant to work for
>>> 20 years running on a single coin cell battery will rollover the time
>>> stamp in 2038. We want to be able to take care of such a situation.
>>> While there are other solutions which can be implemented that are more
>>> efficient, keeping it as simple as possible is better from an end to
>>> end perspective as these logs would be used by applications to
>>> understand the state of the devices. 
>>> 
>>> I was planning on storing microseconds because the OS currently
>>> populates OS time in seconds and microseconds. For microseconds we do
>>> require 32 bits. I agree for milliseconds 16 bits are enough but
>>> higher resolution is always better.
>> 
>> I think 12 bytes of time is more than necessary.  A few notes:
>> 
>> * A single 64-bit microsecond counter allows for 584942 years before
>> rollover.
>> 
>> * A single 32-bit second counter won't actually roll over until 2106
>> (the 2038 issue only applies to signed 32-bit timestamps).
>> 
>> If we want microsecond precision, I would just go with a single 64-bit
>> counter.  Otherwise, 32 bits of seconds is sufficient in my opinion.
>> 
> 
> +1. That was the original thought.  Underlying counters may not be at that 
> precision- but that doesn't mean you can't store it as microseconds 
> 
>> Chris
>> 
>>> 
>>> Regards,
>>> Vipul Rahane
>>> 
 On Apr 7, 2016, at 10:02 PM, Justin Mclean  wrote:
 
 HI, 
 
> I am going to change the log structure so that it stores both(UTC 
> timestamp in seconds - 64 bit, Microseconds since last second - 32 bit)
 
 NO objection, but just out of interest why 64 bit for seconds (when 32 bit 
 of seconds = 60+ years and good until 2038) and 32 bits for milliseconds 
 when 16 bits will do? See also [1]
 
 Thanks,
 Justin
 
 1. https://en.wikipedia.org/wiki/Year_2038_problem#Solutions
>>> 



Re: pull request for ADC and PWM APIs

2016-03-24 Thread will sanfilippo
e don't
> have the BSP dispatch on every API call.
> * This would use 8 bytes of code space for each device (const) used and
> simple function to fetch the device from the library.
> 
> * This uses way less code space as we don't have all the shim code for
> every hal. The hal shim could be a simple inline like above.
> * This would dispense with the sysid concept which might make the whole
> BSP mapping simpler.  There's really no need to map all devices to
> some arbitrary sysid just so we can map them back to a device.
> * An application could just extern the const structures and use
> _hal_device in the code which would not take RAM. With the sysid
> approach these
>  would be constants and also take no RAM.
> * A library would have to store space for all the pointers which is 4
> bytes * number of devices.  The sysid model would depend on the size of
> the sysid.  
> Could be as small as 1 byte, so the sysid approach is BETTER for RAM
> usage in a library.  Imagine 20 GPIOs.  Its either 20*1 = 20 bytes of RAM
> (sysid) or
> 20 * 4 = 80 bytes of RAM.
> 
> The RAM usage concerns me a bit, but I think the idea of removing the
> sysid seems like it would make things overall simpler for the user.
> 
> Comments?
> 
> 
> 
> 
> 
> 
> 
> 
> On 3/23/16, 4:59 PM, "will sanfilippo" <wi...@runtime.io> wrote:
> 
>> I think the hardest part for me to "get over" (if you will) is the fact
>> that hal_xxx_init() does not return a pointer to something and that each
>> of the API has to call the BSP function every time. However, I do
>> understand why you would want to do it the way you did (at least I think
>> I understand).
>> 
>> I do wish that some of the API were a bit more abbreviated but that is
>> only because i dont like typing :-)
>> 
>> And btw, I am interested in hearing the answer to marko's question…
>> 
>> Will
>> 
>>> On Mar 23, 2016, at 4:03 PM, marko kiiskila <ma...@runtime.io> wrote:
>>> 
>>> Good stuff.
>>> 
>>>> On Mar 22, 2016, at 5:21 PM, p...@wrada.com wrote:
>>>> 
>>>> All,
>>>> 
>>>> I'm having so much fun with my newt. Please comment and help me
>>>> improve this work.
>>>> 
>>>> I've submitted two HAL API pull requests.   They are to add new
>>>> HAL_xxx.h files for two new sets of core functionality: ADC and PWM.
>>>> 
>>>> When designing these hal_xxx.h interfaces, I considered the APIs from
>>>> mbed-hal and from arduino-hal.  I treated these as "enough" with a few
>>>> caveats.
>>>> 
>>>> 1.  Generally, APIs that set specific state of devices seem hard to
>>>> maintain and the system designer will have to know about them anyway.
>>>> So I made Apis in the ADC hal to query the resolution and reference
>>>> voltage rather than set them. They will be set by the MCU unless
>>>> configurable, then they can be set by the BSP
>>>> 2.  There were duplicate ways in mbed to set PWM duty cycle. Rather
>>>> than implement them all, I implemented a sufficient subset. Future
>>>> versions could expand on this, or someone could write a helper library
>>>> on top of it to convert between the various methods.
>>>> 
>>>> https://github.com/apache/incubator-mynewt-core/pull/22 - this is a
>>>> pull requests for the hal api for the Pulse Width Modulation. .
>>>> https://github.com/apache/incubator-mynewt-core/pull/21 - this is the
>>>> pull request for the hal API for the Analog to Digital Converters.
>>>> 
>>>> Underlying HAL philosophy.  I tried to following this philosophy when
>>>> doing the interface.
>>>> 
>>>> 1.  A given device may support multiple channels (i.e., one ADC
>>>> controller can sample 8 ports). It needs a single driver since it's one
>>>> device multiplexed.
>>>> 2.  A number of different PWM/ADC devices can be supported at the same
>>>> time.
>>>> 3.  It's possible and likely that there will be N instances of the same
>>>> Device driver with just different state (e.g. Two ADC devices with 8
>>>> channels each that are identical except for memory map).
>>> 
>>> That's good.
>>> 
>>>> So I implemented the HAL as follows:
>>>> 
>>>> 1.  for each hal_xxx.h there is a set of sysid (system_ids). These
>>>> represent individual xxx (e.g. ADC) resources in the system.  The
>>>> adc_sysid and pwm_sysid

Re: [1/3] incubator-mynewt-core git commit: Add read supported commands and read local supported features commands

2016-03-23 Thread will sanfilippo
Yes, I did push those changes to develop. Please give them a whirl.

Nimble Side Notes:
1) I attempted to create a controller-only application. As we expected, there 
are two functions missing (the ones the controller uses to send data and events 
to the host). A bit unexpected were the hci command pool and associated os 
events. A simple (possibly temporary) fix was to move those declarations into 
the common hci definition (duh!). I did that and now there are only those two 
functions to deal with. More on that later...

2) I looked over the LE controller requirements (Vol 2 Part E 3.19). The only 
unsupported mandatory command is Test End. We will add that shortly as well as 
the other test commands (rx and tx).


> On Mar 23, 2016, at 7:05 AM, Sterling Hughes 
>  wrote:
> 
> Hi Will,
> 
> I think some folks were looking for these features, should they try develop 
> branch to test them out?  
> 
> Cheers,
> Sterling 
> 
> 
> Begin forwarded message:
> 
>> From: w...@apache.org
>> Date: March 22, 2016 at 11:13:29 PM PDT
>> To: comm...@mynewt.incubator.apache.org
>> Subject: [1/3] incubator-mynewt-core git commit: Add read supported commands 
>> and read local supported features commands
>> Reply-To: dev@mynewt.incubator.apache.org
>> 
>> Repository: incubator-mynewt-core
>> Updated Branches:
>> refs/heads/develop e233384b4 -> 8a7eb7d48
>> 
>> 
>> Add read supported commands and read local supported features commands
>> 
>> 
>> Project: http://git-wip-us.apache.org/repos/asf/incubator-mynewt-core/repo
>> Commit: 
>> http://git-wip-us.apache.org/repos/asf/incubator-mynewt-core/commit/8a7eb7d4
>> Tree: 
>> http://git-wip-us.apache.org/repos/asf/incubator-mynewt-core/tree/8a7eb7d4
>> Diff: 
>> http://git-wip-us.apache.org/repos/asf/incubator-mynewt-core/diff/8a7eb7d4
>> 
>> Branch: refs/heads/develop
>> Commit: 8a7eb7d4817d935feb8cf8492685d9251a7651fd
>> Parents: 3dc5a47
>> Author: wes3 
>> Authored: Tue Mar 22 23:12:38 2016 -0700
>> Committer: wes3 
>> Committed: Tue Mar 22 23:12:43 2016 -0700
>> 
>> --
>> apps/bletest/src/main.c |  10 +
>> .../controller/include/controller/ble_ll_hci.h  |   4 +
>> net/nimble/controller/src/ble_ll_hci.c  |  49 +
>> net/nimble/controller/src/ble_ll_supp_cmd.c | 197 +++
>> net/nimble/host/include/host/host_hci.h |   2 +
>> net/nimble/host/src/host_dbg.c  |  83 +---
>> net/nimble/host/src/host_hci_cmd.c  |  20 ++
>> net/nimble/include/nimble/hci_common.h  | 101 +-
>> 8 files changed, 388 insertions(+), 78 deletions(-)
>> --
>> 
>> 
>> http://git-wip-us.apache.org/repos/asf/incubator-mynewt-core/blob/8a7eb7d4/apps/bletest/src/main.c
>> --
>> diff --git a/apps/bletest/src/main.c b/apps/bletest/src/main.c
>> index 8b62c77..0863bf4 100755
>> --- a/apps/bletest/src/main.c
>> +++ b/apps/bletest/src/main.c
>> @@ -792,6 +792,16 @@ bletest_task_handler(void *arg)
>>assert(rc == 0);
>>host_hci_outstanding_opcode = 0;
>> 
>> +/* Read local features */
>> +rc = host_hci_cmd_rd_local_feat();
>> +assert(rc == 0);
>> +host_hci_outstanding_opcode = 0;
>> +
>> +/* Read local commands */
>> +rc = host_hci_cmd_rd_local_cmd();
>> +assert(rc == 0);
>> +host_hci_outstanding_opcode = 0;
>> +
>>/* Read version */
>>rc = host_hci_cmd_rd_local_version();
>>assert(rc == 0);
>> 
>> http://git-wip-us.apache.org/repos/asf/incubator-mynewt-core/blob/8a7eb7d4/net/nimble/controller/include/controller/ble_ll_hci.h
>> --
>> diff --git a/net/nimble/controller/include/controller/ble_ll_hci.h 
>> b/net/nimble/controller/include/controller/ble_ll_hci.h
>> index 3e1558f..0f80b54 100644
>> --- a/net/nimble/controller/include/controller/ble_ll_hci.h
>> +++ b/net/nimble/controller/include/controller/ble_ll_hci.h
>> @@ -20,6 +20,10 @@
>> #ifndef H_BLE_LL_HCI_
>> #define H_BLE_LL_HCI_
>> 
>> +/* For supported commands */
>> +#define BLE_LL_SUPP_CMD_LEN (36)
>> +extern const uint8_t g_ble_ll_supp_cmds[BLE_LL_SUPP_CMD_LEN];
>> +
>> /* 
>> * This determines the number of outstanding commands allowed from the
>> * host to the controller.
>> 
>> http://git-wip-us.apache.org/repos/asf/incubator-mynewt-core/blob/8a7eb7d4/net/nimble/controller/src/ble_ll_hci.c
>> --
>> diff --git a/net/nimble/controller/src/ble_ll_hci.c 
>> b/net/nimble/controller/src/ble_ll_hci.c
>> index 7ac07db..e726fd3 100644
>> --- a/net/nimble/controller/src/ble_ll_hci.c
>> +++ b/net/nimble/controller/src/ble_ll_hci.c
>> @@ -123,6 +123,45 @@ ble_ll_hci_rd_local_version(uint8_t 

Re: incubator-mynewt-larva git commit: Fix some version discrepencies in pkg.yml files.

2016-03-03 Thread will sanfilippo
+1. I don't really like author tags myself...

> On Mar 3, 2016, at 4:18 PM, Sterling Hughes  wrote:
> 
> 
> 
> On 3/3/16 4:02 PM, Justin Mclean wrote:
>> Hi,
>> 
>> Just a minor thing I just noticed and certainly not an issue, but the ASF 
>> are not big on author tags.
>> 
>> The PMC can of course discuss and decide what to do here, but you may want 
>> to remove or change them to be “ASF" or “Apache Mynewt".
>> 
>> If they are removed you can see who worked on the file via the commit 
>> history. Author tags in 3rd party code (i.e. code not developed at the ASF) 
>> should be left as it is.
>> 
> 
> +1 this was on my TODO for an email to send out.
> 
> For all standard packages, I think it should be Apache Mynewt as author, and 
> dev@ as the email address.  These fields are really more useful for 3rd party 
> packages that will be developed around the Apache Mynewt core.
> 
> Sterling



Some build issues on master and develop

2016-03-06 Thread will sanfilippo
Hello:

A few things I have noticed about building various targets…

1) The current state of master will generate an error if your project does not 
use baselibc. This is an issue with datetime.c that has been addressed in the 
develop branch but not merged to master.
2) Are all projects expected to build without using baselibc? Some do not 
(well, I know of only one for sure but there could be others).
3) If compiler_def is set to default, some targets won't build. This was 
addressed for the m0 but not the m4.

I can address #3. For #1, who should be merging the change from develop into 
master?

Thanks,
Will

Re: Request to have better discipline with commit messages

2016-03-31 Thread will sanfilippo
Hopefully I fixed the author issue; I am now committing as William San Filippo.

Regarding spaces at the end of lines: I did not realize this was an issue at 
all, but I am sure I am a repeat offender :-) It is easy enough for me to strip 
trailing spaces when I save files with my editor. It might cause a few extra 
diffs at first, but whitespace diffs can be ignored…

Will

> On Mar 31, 2016, at 7:41 AM, Sterling Hughes  wrote:
> 
> Hi Johan,
> 
> Welcome :)
> 
> On 3/30/16 10:36 PM, Johan Hedberg wrote:
>> Hi,
>> 
>> The MyNewt project does a quite good job at keeping a consistent coding
>> style through-out the code base, but the current commit history is quite
>> a mess. Would it be possible to introduce some rules that all git
>> commits should follow?
>> 
>> In the git based projects I've used in the past the commit message
>> consists of an initial short summary line (with a short prefix, followed
>> by a colon + space, and max 70 chars or so width to fit on an 80-wide
>> terminal with git shortlog), an empty line, and then the main body of
>> the commit message (also sticking to max 72-74 line length. This makes
>> browsing the history much easier and the output of git commands that use
>> the summary line (like shortlog or request-pull) becomes more readable.
>> 
> 
> +1
> 
>> Another thing that would be nice to get fixed is for everyone to have
>> proper and consistent git author information. If you run
>> "git shortlog -ns" you'll see that some people are duplicated because
>> of different author names at different times. For the existing history
>> this can be fixed by having a .mailmap file with lines like the
>> following (sorry Will for singling you out ;)
>> 
>> Willam San Filippo  
>> 
> 
> +1 Let's do this.
> 
>> As for the coding style, it's indeed very good and consistent across the
>> code base, but a pet-peeve of mine is still the fact that there's quite
>> often trailing whitespace on lines (which shows up as bright red in my
>> editor since my other projects don't tolerate it). My git diff also
>> shows it as bright red but seems there's no special option to enable
>> that, perhaps the following in .gitconfig does it:
>> 
>> [diff]
>> color = auto
>> 
>> Anyway, I'm hoping the project could take this into consideration since
>> it's clear you do value consistency at least for the coding style.
>> 
> 
> We do.  And... We don't have a CODING_STYLE document :(  We've just all 
> worked together so long.  It's on my TOOD, but unfortunately never makes it 
> to the top.  We need it.
> 
> I hear you on the end of line thing, I've worked across projects where this 
> was & wasn't tolerated -- it drove me crazy.  If we're going to adopt this 
> (which I'm for), I think we're going to need some git  hook which 
> automatically checks for this.
> 
> Sterling



Re: more ADC hal discussion

2016-03-28 Thread will sanfilippo
I will second the motion to abbreviate things more :-) While I do like the 
simplicity of the mbed HAL, I realize that it does not support everything we 
want the HAL to do. And unless I am mistaken, it shouldn't be hard to map 
between the two. So I guess this means a +1 from me.
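
For example, the ADC mapping could be as thin as this (the mbed calls are from 
memory and the mynewt-side names are simplified, so treat it as approximate):

#include "analogin_api.h"    /* mbed-hal C API */

static analogin_t g_adc0;

int
hal_adc_init(int pin)
{
    analogin_init(&g_adc0, (PinName)pin);
    return 0;
}

int
hal_adc_read(void)
{
    /* mbed hands back a normalized 16-bit sample */
    return analogin_read_u16(&g_adc0);
}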

Will

> On Mar 27, 2016, at 2:29 PM, Sterling Hughes  wrote:
> 
> Hey Paul,
> 
> I read through the APIs, I think they look good.  I made a few comments,
> entirely coding standards related.
> 
> There are a few other things I'd like to understand/discuss, which I'll
> post to dev@:
> 
> - Can you post a description of how pins are mapped across MCU, BSP and
> Application?  I think I followed it, but want to make sure we have a
> record.
> 
> - Should system device descriptor be preprocessor directives rather than
> an enum.  Would there be a case where you'd want to do:
> 
> #ifdef SYSTEM_DEV_ADC5
> /* do X */
> #else
> /* do Y */
> #endif
> 
> - hal_adc_get_reference_voltage_mvolts i feel could be shortend to
> hal_adc_refv() or hal_adc_refmv().  Shouldn't this just take a resolution.
> 
> - hal_adc_val_convert_to_mvolts(), should this just be hal_adc_convert()
> and take a resolution.
> 
> Cheers,
> Sterling
> 
> On 3/25/16 4:52 PM, Paul Dietrich wrote:
>> This new implementation is posted as
>> 
>> https://github.com/apache/incubator-mynewt-core/pull/25
>> 
>> Take a look and let me know what you think.  Without negative feedback,
>> I’ll commit on Monday/Tuesday
>> 
>> Paul
>> 
>> On 3/25/16, 2:58 PM, "Paul Dietrich"  wrote:
>> 
>>> Just updating the group with my plan
>>> 
>>> Folks commented offline that they didn't like that the mbed hal doesn't
>>> allow multiple kinds of devices with the same HAL API at the same time.
>>> 
>>> But they liked the pin mapping and init function that tied it to a pin.
>>> 
>>> So the new API will combine the best of hal_adc3 and hal_adc2.  I'll
>>> hopefully post the pull request by the COB or during the weekend.
>>> 
>>> One NOTE.  The memory (RAM) issues of hal_adc2 will be addressed by
>>> getting the device initializer from the BSP.  So the BSP may malloc memory
>>> for these to be efficient.
>>> 
>>> 
>>> 
>>> 
 
>>> 
>>> 
>> 
>> 



FYI: bletiny is over the nrf51 code size limit

2016-03-28 Thread will sanfilippo
Hello:

This is just an FYI: the nrf51 bletiny build is over the code size limit, so if 
you pull develop and build this project it will not link. This will get fixed 
today (hopefully).

Re: os_eventq_get() + timeout

2016-04-04 Thread will sanfilippo
I would not break BC; I would add a different function. Not sure what I would 
call it, but wouldn't it just have a timeout, in ticks, associated with it? For 
example: os_eventq_wait(_evq, timeout_in_os_ticks). What is the purpose of 
the mask, btw? Something to do with returning an error if it times out, or some 
way of selecting particular events?
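
To make that concrete, the shape I am picturing is roughly this (name and 
signature illustrative only):

/* like os_eventq_get(), but gives up and returns NULL if 'timo'
 * ticks elapse before an event is posted */
struct os_event *os_eventq_wait(struct os_eventq *evq, os_time_t timo);

/* usage sketch */
struct os_event *ev;

while (1) {
    ev = os_eventq_wait(&my_task_evq, OS_TICKS_PER_SEC / 2);
    if (ev == NULL) {
        /* timed out: do periodic housekeeping */
        continue;
    }
    /* dispatch ev as usual */
}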

Will

> On Apr 4, 2016, at 8:14 AM, Sterling Hughes  wrote:
> 
> Hey,
> 
> I'm looking at: https://issues.apache.org/jira/browse/MYNEWT-8
> 
> I'm wondering if I should break BC on this one, and add a new parameter, or 
> add a new function call:
> 
> - os_eventq_select()
> 
> OS_EVENTQ_MASK(my_mask, EVENT_T_TIMER);
> OS_EVENTQ_MASK(my_mask, EVENT_T_DATA);
> 
> /* timeout after 200 ticks */
> ev = os_eventq_select(_evq, _mask, 200);
> 
> Thoughts?
> 
> Sterling
> 
> PS: For the uninitiated, os_eventq_get() works as follows.
> 
> In your task, you create an eventq with os_eventq_init(), and then you wait 
> (forever) on os_eventq_get().
> 
> If you (currently) want to not wait forever, you can use a callout, which 
> will post an event to the eventq after a certain time expires.
> 
>  while (1) {
>ev = os_eventq_get(_evq);
>switch (ev->ev_type) {
>case EVENT_T_DATA: /* read data from socket */
> recv_data();
>case EVENT_T_TIMER: /* timer expired */
> os_callout_reset(_callout, _evq, 20);
>}
>  }



Re: os_eventq_get() + timeout

2016-04-04 Thread will sanfilippo
Sounds good to me
+1

> On Apr 4, 2016, at 9:16 AM, Sterling Hughes 
> <sterling.hughes.pub...@gmail.com> wrote:
> 
> 
> 
>> On Apr 4, 2016, at 9:13 AM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> I would not break BC; I would add a different function. Not sure what I 
>> would call it but wouldnt it just have a timeout, in ticks, associated with 
>> it? For example: os_eventq_wait(_evq, timeout_in_os_ticks). What is the 
>> purpose of the mask btw? Something to do with returning an error if it times 
>> out or some way of selecting particular events?
> 
> Right, it would allow you to quickly poll for a set/type of events.  Select 
> is a common UNIX call which does the same thing (the other being poll()).
> 
> I think it's useful to search the queue for a type of event, so if we're 
> adding a new call with timeout, I think we should add that functionality. 
> 
> Sterling 



Re: First draft of Coding standards in develop branch

2016-04-26 Thread will sanfilippo

> On Apr 26, 2016, at 8:28 AM, Christopher Collins  wrote:
> 
> On Sun, Apr 24, 2016 at 10:08:09AM -0700, Sterling Hughes wrote:
>> Hi,
>> 
>> As we start to bring on new contributors, and operate as a project, its 
>> increasingly important that we document and agree upon coding standards. 
>>  I think we've done a good job of maintaining this consistency 
>> informally, but, now we need to vote and agree on project standards.
>> 
>> I've taken a first stab at this and committed it to the develop branch, 
>> folks can see it here:
>> 
>> https://github.com/apache/incubator-mynewt-core/blob/develop/CODING_STANDARDS.md
> 
> Thanks for putting this together Sterling.  I think it looks great.  My
> opinion is that a coding standards document should not be overly
> prescriptive.  Everyone has his own set of coding pet peeves; I suggest
> we try to keep those out of this document and keep it as short as
> possible.  Otherwise, people won't adhere to the document, or they will
> just hate writing code and they won't contribute as much.
> 
> For me, the important rules are:
>* Maximum line length
>* Brace placement
>* Typedefs
>* All-caps macros
>* Compiler extensions (e.g., packed structs).
> 
> The first three are already captured; I think the others should be
> addressed.  I think macros should always be in all-caps for reasons that
> everyone is probably familiar with. Unfortunately, I don't have a good
> rule for when extensions are acceptable.
> 
> I would also like to see a note about when it is OK to stray from the
> conventions.  There will be times (rarely) when adhering to the
> standards document just doesn't make sense.  "Zero-tolerance" rules
> always seem to pave the road to hell :).
+1. Even though I have preferences, they are simply preferences. Specifying as 
little as possible as a hard requirement seems like a good idea.
> 
> Finally, there is one point in particular that I wanted to address:
> include guards in header files.  From the document:
> 
>* ```#ifdef``` aliasing, shall be in the following format, where
>the package name is "os" and the file name is "callout.h":
> 
>```no-highlight
>#ifndef _OS_CALLOUT_H
>#define _OS_CALLOUT_H
> 
> I am not sure I like this rule for the following two reasons:
>* A lot of the code base doesn't follow this particular naming
>  convention.
>* Identifiers starting with underscore and capital letter are
>  reserved to the implementation, so technically this opens the door
>  to undefined behavior.  
> 
> A leading capital E is also reserved by POSIX (e.g., EINVAL).  The
> naming convention I use is:
> 
>H_CALLOUT_
> 
> I would not consider this something to worry about, and I don't think we
> need to include a specific naming convention in the document.  However,
> insofar as we prescribe a naming convention, it should be one which
> avoids undefined behavior.
> 
> Thanks,
> Chris



Re: First draft of Coding standards in develop branch

2016-04-25 Thread will sanfilippo
Argh! I thought I had the stupid editor set to insert spaces for tabs. Dang! Oh 
well, at least you got the point :-)

* I would vote for macros all uppercase.
* I feel strongly about define alignment but not so much about alignment within 
structure definitions, although I do generally align things in structures.

Oh, some other good ones to discuss:
1) Should we allow initialization of local variables where they are defined? (my 
vote: no)
2) Should all local variables be defined at the beginning of the function? (my 
vote: yes)
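
To be clear about what I mean in 2), a contrived example (bar() is just a 
stand-in):

int
foo(int x)
{
    int rc;

    rc = bar(x);
    if (rc != 0) {
        return rc;
    }
    return 0;
}

rather than declaring "int rc = bar(x);" part way down the function body.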



> On Apr 25, 2016, at 8:48 PM, Sterling Hughes <sterl...@apache.org> wrote:
> 
> 
> 
> On 4/25/16 8:43 PM, will sanfilippo wrote:
>> Proposed Changes:
>> 
>> * A function prototype in a header file may (or should?) be a single line 
>> (i.e. type and name can be on same line).
>> * Global variables should be prefaced by g_
>> 
>> Comments:
>> * I dont see anything here regarding “alignment” of various things. Should 
>> we be adding these to the coding style? For example:
>> 
>> This:
#define PRETTY       (0)
#define VERY_PRETTY  (1)
#define BEAUTIFUL    (2)
> 
> You used tabs here - so it shows unaligned in email :-), but I get the point 
> and agree.  I don't feel too strongly about '#define' alignment, but am happy 
> to add it, I do it anyway.
> 
>> 
>> Not:
>> #define UGLY (0)
>> #define REALLY_UGLY (1)
>> #define HIDEOUS (2)
>> 
>> — or —
>> 
>> This:
>> struct my_struct
>> {
    int          ms_foo1;
    uint16_t     ms_foo2;
    struct qelem elem;
>> }
>> 
>> Not:
>> struct my_struct
>> {
>>  int ms_foo1;
    uint16_t foo2;
    struct qelem elem;
>> }
> 
> +1 for this one.
> 
>> 
>> Questions:
>> * I presume that outside code not written to this style will not be 
>> modified? For example, another open source project has code that we adopt.
> 
> We should add a note: follow the coding standards of the original source is 
> my perspective.
> 
>> * I presume that if not explicitly stated as “dont do” you can do it. For 
>> example, do all macros have to be uppercase? I can have either MY_MACRO(x) 
>> or my_macro(x)?
>> 
> 
> Within reason.  We can still make fun of particularly ugly code. :-)
> 
> On macros, what are people's sense?  I prefer to have _ALL_ my macros 
> uppercased, but I didn't put that in there.  I like to know what is a macro 
> (upper-case), vs what is a function.
> 
> Sterling



Re: First draft of Coding standards in develop branch

2016-04-26 Thread will sanfilippo

> On Apr 25, 2016, at 9:22 PM, p...@wrada.com wrote:
> 
> 
> My notes. Only a few strong opinions.
> 
> 0) Way to go Sterling.  Better sooner than later for this.
> 1) when I was writing the hal I wrote a hal/hal_adc.h (public API)  and
> mcu/hal_adc.h (private API name for the BSP to set MCU specific
> parameters). 
> I was burned because of the #ifndef __HAL_ADC_H__.   I replaced them with
> __HAL_HAL_ADC_H__ and __MCU_HAL_ADC_H__.  Really, there was probably not
> a need to have the BSP include the public API, but I believe there was
> some 
> include path that surprised me.  If we are serious about using the
> directory 
> as a namespace, we had better include it in the header include protection.
> These types of errors can be hard to detect.
+ 1.
> 2) I'm a fan of macros in upper case only.
> 3) Regarding typedefs, can they be used for static functions? Seems that
> this makes things readable and doesn't cause the harm you mention.
> 4) Should we be picky about the use of const?
Please god say no :-)
> 5) any rules or naming conventions for enums?
> 6) any guidelines on file names? Is there a name length limit?  Probably
> should make names all lower case. Should the file name and function name
> match for public functions? (e.g. hal_adc.h contains hal_adc_init() )
I thought this was mentioned?
> 7) Muti-line comments formatted like in our apache header.
> 8) Any convention on line break character \?
> 9) I think the 79 character line length is not really helpful.  I'd rather
> see slightly longer lines.  I often prefer to use longer names, for example
> int res = hal_adc_get_resolution_mvolts(padc) to make it clear what is
> going on and the units, but that may make lots of wrapping with an 80
> column limit.  This simple statement used 45 characters already.  I know
> its been standard forever, but screens are 5x wider than they used to be.
> Can't we stretch this to 120? I hope you are reading this email with
> 80 columns!!
Good luck getting others to change this :-) I would be fine personally.
> 10) any other comment info like author or date at the top of the file ?
> 11) It always bums me out to see opposite conventions on parenthesis for
> functions and other code blocks.  For example suppose I do this.
> 
> void
> Foo(int x) 
> {
>/* some code with a conditional code block */
>if (this or that) {
> /* some code in a separate block that is conditional */
>}
> 
>/* an unconditional code block with good reason */
>{
> uint8_t local_tmp_for_calc;
> /* do a computation with some local variable and make
>  * it clear it has limited scope */
>}
> 
>switch(condition) {
>case value:
>{
>  uint8_t a_temp_i_only_need_here;
>  /* do a computation with a local variable with limited
>   * scope */
>  break;
>}
>}
> }
> 
> I get why we want to have that lingering brace on the end of the
> if and switch to make the code more succinct, but it seems at odds with
> the other code blocks.  Maybe its my bad style, but I occasionally use
> code blocks in case statements and free-standing functions to do a local
> calculation with a variable that I want to make clear is only valid in a
> limited scope (as opposed to declaring it at the top of the function).
> This leaves my code looking inconsistent because the if and switch have
> one style code block and the case, free, and function have another.
It is just me personally, but the switch/case above is hard for me to read 
(because of the {} around the guts of the case).
So I would vote -1 for that.
> 
> 
> 
> 
> On 4/25/16, 8:48 PM, "Sterling Hughes" <sterl...@apache.org> wrote:
> 
>> 
>> 
>> On 4/25/16 8:43 PM, will sanfilippo wrote:
>>> Proposed Changes:
>>> 
>>> * A function prototype in a header file may (or should?) be a single
>>> line (i.e. type and name can be on same line).
>>> * Global variables should be prefaced by g_
>>> 
>>> Comments:
>>> * I dont see anything here regarding "alignment" of various things.
>>> Should we be adding these to the coding style? For example:
>>> 
>>> This:
>>> #define PRETTY       (0)
>>> #define VERY_PRETTY  (1)
>>> #define BEAUTIFUL    (2)
>> 
>> You used tabs here - so it shows unaligned in email :-), but I get the
>> point and agree.  I don't feel too strongly about '#define' alignment,
>> but am happy to add it, I do it anyway.
>> 
>>> 
>>> Not:
>>> #define UGLY (0)
>>> #define REALLY_UGLY (1)

Re: [DISCUSS] Release Apache Mynewt 0.9.0-incubating-rc1

2016-05-20 Thread will sanfilippo
Unfortunately we don't really support the native bsp for ble as of yet. I used 
to compile it regularly, but it was not used for any particular purpose so I 
neglected it in this release (my bad).

I will make sure it compiles. In the future it would be nice to create a sim 
project that can simulate a device to some extent (to what extent would be 
interesting to get opinions on).

Will

> On May 19, 2016, at 7:31 AM, Christopher Collins  wrote:
> 
> On Thu, May 19, 2016 at 11:30:05AM +0200, Kevin Townsend wrote:
>> I'm running in 'develop' which may not be the right branch, but 
>> switching a bare bones BLE project to 'native' as a BSP generates this 
>> error:
>> 
>> $ newt build bleuart
>> Building target targets/bleuart
>> Compiling ble_ll_adv.c
>> Error: ble_ll_adv.c:24:22: fatal error: ble/xcvr.h: No such file or 
>> directory
>> compilation terminated.
>> 
>> Copying the header from here 
>> (https://github.com/apache/incubator-mynewt-core/tree/develop/net/nimble/drivers/nrf51/include/ble)
>>  
>> solves this but the file should probably exist in 
>> net/nimble/drivers/native as well, right?
>> 
>> If 'develop' corresponds to 0.9.0-rc1 I can submit a pull request if 
>> develop is the right branch for this?
> 
> Hi Kevin,
> 
> The develop branch should be identical to the release candidate, so any
> issues you are seeing also exist in 0.9.0-rc1.  Just for my own
> clarification, is the issue you describe new to 0.9.0-rc1?  I was under
> the impression that native support for the nimble controller
> has never worked, and that it has been on the todo list for a while.
> 
> If you have a fix for the compiler error, then I am sure a pull request
> would probably be welcome.  Will is more familiar with the nimble
> controller, so I think I will let him and others chime in.
> 
> Thanks,
> Chris



Re: [VOTE] Release Apache Mynewt 0.9.0-incubating-rc1

2016-05-20 Thread will sanfilippo
All:

I committed a fix for the native ble build. Basically, I had to add a bunch of 
stubs to the phy and also include the xcvr.h header file. If there are any 
other issues please let me know.
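
For reference, the stubs are just no-ops so that the controller links; shape-wise 
they look like the following (one example only, see the commit for the full list):

/* net/nimble/drivers/native/src/ble_phy.c */
int
ble_phy_init(void)
{
    /* nothing to set up for the simulated phy */
    return 0;
}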

Will

> On May 19, 2016, at 7:38 AM, Kevin Townsend  wrote:
> 
> Hi Chris,
> 
> Sorry this may be an old issue then so feel free to ignore. I understand
> that native emulation of the BLE stack doesn't currently work and there are
> other priorities, but copying that one file at least allows me to build a
> basic project to test some custom shell commands and make sure the command
> parsing works as expected etc.
> 
> I'll wait for some feedback though to know what the plans are around nimble
> plus native mode. It was a 30 second issue to fix with my artificial test case
> today but maybe there are other issues I'm not aware of down the road since
> I haven't started digging into nimble yet.
> 
> K.



Re: [VOTE] Release Apache Mynewt 0.9.0-incubating-rc1

2016-05-20 Thread will sanfilippo
The vote is open for at least 72 hours and passes if a majority of at
least three +1 PPMC votes are cast.

[ X] +1 Release this package
[ ]  0 I don't feel strongly about it, but don't object
[ ] -1 Do not release this package because…

+1 binding

Will

Re: my newt on STM32091c-eval board

2016-05-19 Thread will sanfilippo
Hello David:

I took a peek at the evaluation board you mentioned. We don't have that eval 
board or that flavor of ST chip in house, but getting mynewt up and running on 
it would certainly be possible. It appears that this is the 256KB flash/32KB 
RAM version on that eval board. Is that correct? If so, that should be plenty 
of space for mynewt and a really killer app!

What you would need to do is create a bsp for this board and add MCU support 
for it. I am not a huge fan of STM32Cube, but that is the SDK that ST points to 
for this eval board, so using that code for the HAL would be the quickest route 
if you wanted a HAL for it. Of course, you don't need to support the full HAL 
in the first cut (just the pieces you need).

We have tutorials on creating bsps and adding mcu support on the mynewt page so 
if you wanted to take a crack at adding support that would be great. We love 
feedback on the tutorials and we are always around to help if you have 
questions.


> On May 19, 2016, at 9:56 AM, David G. Simmons  wrote:
> 
> Is this something that is supported/possible? I don’t happen to have an 
> STM32F3DISCOVERY board, but I do happen to have one of these lying around. If 
> anyone has used this board, or knows haow to get it up and running with 
> mynewt, I’d appreciate some pointers/help.
> 
> Best regards,
> dg
> --
> David G. Simmons
> (919) 534-5099
> Web  • Blog  • 
> Linkedin  • Twitter 
>  • GitHub 
> /** Message digitally signed for security and authenticity.  
>  * If you cannot read the PGP.sig attachment, please go to 
>  * http://www.gnupg.com/  Secure your email!!!
>  * Public key available at keyserver.pgp.com 
> **/
> ♺ This email uses 100% recycled electrons. Don't blow it by printing!
> 
> There are only 2 hard things in computer science: Cache invalidation, naming 
> things, and off-by-one errors.
> 
> 



Re: my newt on STM32091c-eval board

2016-05-19 Thread will sanfilippo
You are correct; adding a bsp and mcu support is not the first thing I would 
tackle either :-) Those boards are indeed cheap so getting one and trying 
mynewt out with a board that is already supported is definitely the path I 
would take.

I think once you get familiar with mynewt you will find that adding BSP/MCU 
support is fairly easy.

Let us know how it goes!

Will

> On May 19, 2016, at 10:49 AM, David G. Simmons <santa...@mac.com> wrote:
> 
> Thanks for the quick response Will!
> 
> I’m brand new to mynewt, so I'm not sure that dealing with this is 
> necessarily the first thing I should tackle. The STM32F3DISCOVERY board was 
> only $12 at DigiKey, so I just ordered one. I have a more pressing goal with 
> mynewt first, but once I get that underway and am more comfortable with it, I 
> will probably tackle this board. Ultimately I’m looking to get it running on 
> a (currently non-existent in the wild, but coming soon) M0-based SoC, so 
> creating a bsps for that chip will have to be done, but … baby steps!
> 
> Thanks!
> 
> dg
> 
>> On May 19, 2016, at 1:41 PM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> Hello David:
>> 
>> I took a peek at the evaluation board you mentioned. We dont have that eval 
>> board in house nor do we have that flavor of st chip in house, but getting 
>> mynewt up and running on this would certainly be possible.  It appears that 
>> this is the 256K Flash/32KB RAM version on that eval board. Is that correct? 
>> If so, that should be plenty of space for mynewt and a really killer app!
>> 
>> What you would need to do is to create a bsp for this board and add MCU 
>> support for it. I am not a huge fan of STM32Cube but that is the SDK that ST 
>> points to for this eval board so using that code for the HAL would be the 
>> quickest route if you wanted HAL for it. Of course, you dont need to support 
>> the HAL in the first cut (just pieces you needed).
>> 
>> We have tutorials on creating bsps and adding mcu support on the mynewt page 
>> so if you wanted to take a crack at adding support that would be great. We 
>> love feedback on the tutorials and we are always around to help if you have 
>> questions.
>> 
>> 
>>> On May 19, 2016, at 9:56 AM, David G. Simmons <santa...@mac.com> wrote:
>>> 
>>> Is this something that is supported/possible? I don’t happen to have an 
>>> STM32F3DISCOVERY board, but I do happen to have one of these lying around. 
>>> If anyone has used this board, or knows haow to get it up and running with 
>>> mynewt, I’d appreciate some pointers/help.
>>> 
>>> Best regards,
>>> dg
>>> 
>> 
> 



Re: Procedure for changing power level

2016-05-19 Thread will sanfilippo
The API takes a signed int as a parameter, as I usually just like using 'int' 
as opposed to 'int8_t' in function signatures. However, the lower-layer phy 
driver stores the power as a signed, 8-bit integer, as that is all that is 
needed at the lower layer (just as you say, Kevin).
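
Roughly, the split looks like this (names approximate, the variable name in 
particular is illustrative):

static int8_t g_ble_phy_txpwr_dbm;  /* 8 bits is all the lower layer needs */

int
ble_phy_txpwr_set(int dbm)
{
    /* (range checking omitted in this sketch) */
    g_ble_phy_txpwr_dbm = (int8_t)dbm;
    return 0;
}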

Certainly we can change the API to int8_t if folks think that is proper…

Will

> On May 19, 2016, at 11:17 AM, Christopher Collins  wrote:
> 
> On Thu, May 19, 2016 at 11:12:39AM -0700, James Howarth wrote:
>> Hi Chris,
>> 
>> I think it needs to be a signed int right, as txpwer can be negative, does
>> that sound right?
> 
> Yes, good catch (thanks also, Kevin!).
> 
> In that case, you should declare the tx power variable as an int8_t.
> 
> Chris



Re: Migrate clock source or PLL setup to the BSP level

2016-05-11 Thread will sanfilippo
Hello Kevin:

Well, I just woke up and haven't had my coffee yet, so if this response makes no 
sense you know why :-)

I certainly agree that what you have run across here is an issue that we will 
have to deal with. Certainly the clock source and/or its availability needs to 
be dictated by the BSP. We have discussed how clocks get handled by the bsp and 
realize that  we need to do a bit of work on this. We also need to consider how 
the application developer gets to choose which clock source to use as 
(demonstrably) there will be multiple choices for a given BSP and (imo) it 
should be possible for the app to choose this (or at least define it in the 
bsp).
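
To make that concrete, one shape this could take is sketched below. This is
only an illustration, not an existing Mynewt API: the MCU package provides a
weak default and a BSP overrides it when the board differs.

    /* MCU package default (weak): assumes a 32.768 kHz crystal. */
    uint32_t __attribute__((weak))
    bsp_get_lfclk_src(void)
    {
        return (CLOCK_LFCLKSRC_SRC_Xtal);
    }

    /* Override in the BSP of a crystal-less board: synthesize the LF
     * clock from the 16 MHz crystal instead. */
    uint32_t
    bsp_get_lfclk_src(void)
    {
        return (CLOCK_LFCLKSRC_SRC_Synth);
    }

hal_os_tick.c would then assign NRF_CLOCK->LFCLKSRC from bsp_get_lfclk_src()
rather than hard coding the XTAL source.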

Anyway, I will add a jira ticket for this and we will address it in the 
upcoming release (or sooner). If you have any ideas/suggestions on how this 
should work in the system please let us know. We will discuss here on the dev 
list so please chime in.

Thanks!
Will

> On May 11, 2016, at 5:32 AM, Kevin Townsend  wrote:
> 
> Just as a follow up, I was able to get the code to run with the following 
> modifications on a nRF51 based board with no 32kHz XTAL present (i.e.: 
> https://www.adafruit.com/product/2267):
> 
>    /* Turn on the LFCLK */
>    NRF_CLOCK->XTALFREQ = CLOCK_XTALFREQ_XTALFREQ_16MHz;
>    NRF_CLOCK->TASKS_LFCLKSTOP = 1;
>    NRF_CLOCK->EVENTS_LFCLKSTARTED = 0;
>    // KTOWN: Changed to simulated LF clock
>    // NRF_CLOCK->LFCLKSRC = CLOCK_LFCLKSRC_SRC_Xtal;
>    NRF_CLOCK->LFCLKSRC = CLOCK_LFCLKSRC_SRC_Synth;
>    NRF_CLOCK->TASKS_LFCLKSTART = 1;
> 
>    /* Wait here till started! */
>    // KTOWN: Changed to simulated LF clock
>    // mask = CLOCK_LFCLKSTAT_STATE_Msk | CLOCK_LFCLKSTAT_SRC_Xtal;
>    mask = CLOCK_LFCLKSTAT_STATE_Msk | CLOCK_LFCLKSRC_SRC_Synth;
> 
> 
> On 11/05/16 14:04, Kevin Townsend wrote:
>> 
>> While working on a BSP based on the nRF51 SoC, I noticed what may be an 
>> issue with the current division of config data between the 'bsp' and the 
>> 'mcu' code.
>> 
>> For the nRF51 (and likely the nRF52, I haven't looked yet) the LF clock 
>> source is defined in 'mcu/nordic/nrf51xxx/src/hal_os_tick.c':
>> 
>>    /* Turn on the LFCLK */
>>    NRF_CLOCK->XTALFREQ = CLOCK_XTALFREQ_XTALFREQ_16MHz;
>>    NRF_CLOCK->TASKS_LFCLKSTOP = 1;
>>    NRF_CLOCK->EVENTS_LFCLKSTARTED = 0;
>>    NRF_CLOCK->LFCLKSRC = CLOCK_LFCLKSRC_SRC_Xtal;
>>    NRF_CLOCK->TASKS_LFCLKSTART = 1;
>> 
>> The XTAL is hard coded as a source, meaning that a 32kHz crystal must be 
>> present on the board for the LF clock to work. This is really a board level 
>> choice, though, and we have a number of boards in production that left the 
>> 32kHz crystal off to control costs at the tradeoff of slightly higher power 
>> consumption, simulating the LF clock with the mandatory 16MHz XTAL (which is 
>> a valid choice with the nRF51 SoC).
>> 
>> The real question, of course, is if clock source or PLL setup decisions 
>> (frequency, multipliers, etc.) should be defined in the 'mcu' or migrated to 
>> the 'bsp' level? Hard coding this without an easy override seems like an 
>> unnecessary restriction and will just push people to modify the global MCU 
>> code.
>> 
>> There is a small design challenge defining a generic interface for this, of 
>> course, since the clock or PLL config and setup is specific to individual 
>> silicon vendors and their own design decisions, but if there isn't already a 
>> mechanism to define clock source and setup at the BSP level (perhaps I 
>> simply missed it!), it's probably worth considering.
>> 
>> Before going too far into it though, perhaps there is already a mechanism to 
>> this end that I missed.
>> 
> 



Re: privacy modes for LE

2016-05-13 Thread will sanfilippo
OK; I probably should not have used the term “expected”. It should be up to the 
application whether or not it wants to change the random static address after a 
reboot.

I don't see a reason why we would always want to use a random address for 
active scans. Shouldn't that be up to the application as well?
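
For reference, the proposal quoted below might map onto a config along these
lines; this is purely illustrative, not an existing Nimble structure:

    /* Hypothetical sketch of the proposed privacy configuration. */
    enum ble_addr_mode {
        BLE_ADDR_MODE_IDENTITY,  /* public or static random identity */
        BLE_ADDR_MODE_NRPA,      /* non-resolvable private address */
        BLE_ADDR_MODE_RPA,       /* resolvable private address */
    };

    struct ble_privacy_cfg {
        enum ble_addr_mode addr_mode;
        uint32_t addr_timeout;   /* seconds between NRPA/RPA rotations */
    };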


> On May 13, 2016, at 2:45 PM, p...@wrada.com wrote:
> 
> That's what I thought as well, but I think we were mistaken
> 
> I believe the idea was so that tiny devices (like tile) can create an
> address without having to have manufacturing stuff happen.
> 
> It looked to me like both zephyr and soft device want to try to keep these
> random addresses forever.
> 
> Paul
> 
> On 5/13/16, 2:43 PM, "will sanfilippo" <wi...@runtime.io> wrote:
> 
>> Why exactly do you want to store the random, static address? My
>> understanding is that this is expected to change with reboots…
>> 
>>> On May 13, 2016, at 1:59 PM, p...@wrada.com wrote:
>>> 
>>> I'm working on LE privacy modes.  I reviewed The soft device from
>>> nordic and also zephyr and have the following proposal
>>> 
>>> Privacy API proposal
>>> 
>>> 1.  a config for address mode
>>>*   Identity.
>>>*   NRPA
>>>*   RPA
>>> 2.  An address timeout to rotate NRPA/RPA
>>> 
>>> Initialization -
>>> The default mode will be Identity addressing:
>>> 
>>> *   If you are configured for identity address mode, the host code
>>> will try to get the identity address from the controller.  If it gets
>>> the identity address it will use it
>>> *   If that is unavailable it will checks its NV storage for a static
>>> private address.  If it gets one, it will use it.
>>> *   If that is not found, it will generate a static private address
>>> and store it in the NVRAM.
>>> 
>>> If you are configured for NRPA or RPA, they are used for all scans,
>>> advertising and connections. The host stack will use the controller
>>> for RPA generation and decoding.
>>> 
>>> The host will keep a non volatile key-cache for IRK that has a
>>> configurable size.  At boot, these will be loaded into the controller.
>>> Whenever a new key is retrieved in bonding, it will be added to the NVM
>>> and to the controller.
>>> 
>>> Comments please. I don't have a lot of experience here and any advice
>>> would be appreciated.
>>> 
>>> One observation.  The Zephyr stack seems to always use random addresses
>>> for active scans.  Do we want to do the same?
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>> 
> 



Nimble controller configuration

2016-05-05 Thread will sanfilippo
Hello:

Unless there are any objections I am going to make the following changes to the 
nimble stack in regard to controller configuration.

The main reason for the change is to move the controller configuration options 
out of the controller and into nimble_opt.h with the rest of the configuration 
options (allowing them to be set by the target).

The other issue that will be addressed is the option NIMBLE_OPT_LL_MAX_PKT_SIZE 
and devices that cannot support this size due to HW limitations. Currently, 
this option dictates the supported max tx/rx octets used for connection data 
length pdu management. It is also intended to be used to reduce controller RAM 
usage although currently it affects RAM usage only slightly.

The proposed changes are:

* Move the configuration options out of net/nimble/controller/include/ble_ll.h 
and into net/nimble/include/nimble/nimble_opt.h
* Encryption will now be supported by default (as opposed to being on/off by 
default).
* Only features which affect RAM/code size will be added to nimble_opt.h; 
others will automatically be set by the controller.
* If the LL_MAX_PKT_SIZE is set to a value greater than that allowed by HW the 
HW will use the maximum value it supports.
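
As a sketch of how this would look once the options live in nimble_opt.h (the
default value shown is just an example), each option gets a guard so a target
can override it:

    /* net/nimble/include/nimble/nimble_opt.h */
    #ifndef NIMBLE_OPT_LL_MAX_PKT_SIZE
    #define NIMBLE_OPT_LL_MAX_PKT_SIZE (251)    /* example default */
    #endif

and a target then overrides it with something like:

    newt target set my_target cflags="-DNIMBLE_OPT_LL_MAX_PKT_SIZE=27"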

Thanks





Re: BLE Tiny Unhandled exception

2016-05-05 Thread will sanfilippo
Hello:

Can you show me your targets? newt target show will do this. Also, can you 
describe the steps you used to build and load the image on the board? There are 
different ways to do this but this is what I did:

newt clean nrf52dk_bletiny
newt build nrf52dk_bletiny
newt create-image nrf52dk_bletiny 0.0.0
newt load nrf52dk_bletiny

After that gets loaded you can reset the board and bletiny should boot 
(assuming you already have the bootloader downloaded).

I have the latest off develop and I am not seeing the same issue. Here is the 
target I am building with (I show the bootloader target as well).

targets/nrf52dk_bletiny
app=apps/bletiny
bsp=hw/bsp/nrf52dk
build_profile=optimized
cflags=-DNIMBLE_OPT_SM=1 -DSTATS_NAME_ENABLE 
targets/nrf52dk_boot
app=apps/boot
bsp=hw/bsp/nrf52dk
build_profile=optimized



> On May 5, 2016, at 2:35 PM, Cody Smith  wrote:
> 
> Hey Everybody,
> 
> I've been working with the nRF52 development board PCA10040 and have had
> some issues running the bletiny example app that comes in the repo.  I
> switched to the dev branch of the repo to get the nrf52dk as opposed to the
> nrf52pdk, but that didn't solve this particular issue (although it did fix
> a few things).
> 
> I got the example code to build and load onto the board, but when it runs,
> I see the following output printed to the terminal window on my desktop
> (coming over the serial port in Tera Term) continuously.
> 
> :Unhandled interrupt (3), exception sp 0x20007fa0
> 0: r0:0x20007fa0  r1:0x200043a0  r2:0x0800  r3:0x0100
> 0: r4:0x200043a0  r5:0x0800  r6:0x0100  r7:0x200043c0
> 0: r8:0x  r9:0x00016047 r10:0x0501 r11:0x
> 0:r12:0x200043c0  lr:0x  pc:0x00016047 psr:0x0501
> 0:ICSR:0x0803 HFSR:0x4000 CFSR:0x8200
> 0:BFAR:0x0800 MMFAR:0x0800
> 
> The blinky app was running just fine.  I also see this same thing with the
> bletest app as well.  Any ideas on what this could be?
> 
> Thanks for the help,
> 
> Cody



Re: newtmgr protocol, no sequence number

2016-05-06 Thread will sanfilippo
Sorry for the late response… this looks good to me.
> On May 3, 2016, at 2:49 PM, marko kiiskila  wrote:
> 
> Hi,
> 
> I was going to add a sequence number to message header
> to match responses to requests. It would be better if we
> could detect responses to retransmitted requests, for example.
> 
> I was going steal one byte from nh_id field and have one
> byte worth of sequence number. I feel one byte being
> sufficient width, as I don’t think there would be that many
> outstanding requests at any given time. Just one or two.
> 
> I think keeping the header size as 8 bytes is valuable, as well as
> keeping it a multiple of 4 bytes.
> 
> Here’s the old header:
> struct nmgr_hdr {
>     uint8_t nh_op;
>     uint8_t nh_flags;
>     uint16_t nh_len;
>     uint16_t nh_group;
>     uint16_t nh_id;
> };
> 
> Here is what the new header would look like:
> struct nmgr_hdr {
>     uint8_t  nh_op;     /* NMGR_OP_XXX */
>     uint8_t  nh_flags;
>     uint16_t nh_len;    /* length of the payload */
>     uint16_t nh_group;  /* NMGR_GROUP_XXX */
>     uint8_t  nh_seq;    /* sequence number */
>     uint8_t  nh_id;     /* message ID within group */
> };
> 
> This will break backwards compatibility, when requester starts
> filling in the sequence number.
> 
> Any objections, or other comments?
> 
> I was also going to add a CRC at the end of the message, when
> newtmgr goes over serial line. You can run this protocol over
> UARTs with no flow control, so we should detect errors on this.
> —
> M



Re: callout and callout_func

2016-05-06 Thread will sanfilippo
My vote would be #2 as well.


> On May 6, 2016, at 11:29 AM, marko kiiskila  wrote:
> 
> Hi,
> 
>> On May 5, 2016, at 10:47 AM, Sterling Hughes  wrote:
>> 
>> Salutations,
>> 
>> As I've been going through the callout implementation, one thing I've 
>> noticed is that callouts and callout_funcs can't be interleaved.
>> 
>> The implementation of a callout, is that it has an event as the first 
>> element of the structure.  When that event is posted to an event queue, it 
>> is posted with the event type EVENT_T_TIMER, which is reserved for callouts. 
>>  However, you must know a priori what type of callout it is, a callout, or a 
>> callout_func.
>> 
>> I don't think this behavior is ideal, and there are a couple of options for 
>> fixing it:
>> 
>> 1- Break out EVENT_T_TIMER into EVENT_T_TIMER (callout) and 
>> EVENT_T_TIMER_FUNC (callout_func).
>> 
>> 2- Remove the concept of callout, and just have callout_func. callout_func 
>> is by far the more useful of the two.
>> 
>> 3- Add a flags field to callout, which will tell you whether its a callout 
>> or a callout_func.
>> 
>> I'm leaning towards either #2 or #3 here, because I think the first one will 
>> end up being confusing when debugging things.  "Oh no, I put TIMER instead 
>> of TIMER_FUNC. GRR."  My personal preference is #2, but I'm not sure 
>> everyone wants to be forced to have a function per-timer in their task 
>> context.
>> 
>> Thoughts?
> 
> I would prefer #2, as that would simplify the concept.
> 
> Also, while you have that file cracked open, cf_arg from within 
> os_callout_func could be removed.
> os_callout includes os_event, and that structure already has a void * which 
> could be used as callout_func
> argument.
> —
> M



Re: BLE Tiny Unhandled exception

2016-05-06 Thread will sanfilippo
No problem; glad you got it working. If you have any other questions please 
don't hesitate to ask.

Are you planning on writing any BLE apps? Any feedback you have on the process 
would be great to hear...


> On May 6, 2016, at 9:01 AM, Cody Smith <cody.smi...@gmail.com> wrote:
> 
> Hey Will,
> 
> Thank you for your help.  My bletiny target looks the same as yours, except
> I don't have the -DNIMBLE_OPT_SM=1 cflag.  It appears to be working now.  I
> wasn't creating an image and I hadn't loaded the bootloader.  I'm not sure
> how I missed the information about loading and running the bootloader since
> I see it on the website clearly now.  Perhaps I was in a hurry to play with
> my new toy :)
> 
> In any case,  I was simply running:
> 
> newt clean nrf52dk_bletiny
> newt build nrf52dk_bletiny
> newt load nrf52dk_bletiny
> 
> Now I'm loading the bootloader and creating the bletiny image as you
> suggested.
> 
> Thanks for your time,
> 
> Cody
> 
> On Thu, May 5, 2016 at 4:10 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Hello:
>> 
>> Can you show me your targets? newt target show will do this. Also, can you
>> describe the steps you used to build and load the image on the board? There
>> are different ways to do this but this is what I did:
>> 
>> newt clean nrf52dk_bletiny
>> newt build nrf52dk_bletiny
>> newt create-image nrf52dk_bletiny 0.0.0
>> newt load nrf52dk_bletiny
>> 
>> After that gets loaded you can reset the board and bletiny should boot
>> (assuming you already have the bootloader downloaded).
>> 
>> I have the latest off develop and I am not seeing the same issue. Here is
>> the target I am building with (I show the bootloader target as well).
>> 
>> targets/nrf52dk_bletiny
>>app=apps/bletiny
>>bsp=hw/bsp/nrf52dk
>>build_profile=optimized
>>cflags=-DNIMBLE_OPT_SM=1 -DSTATS_NAME_ENABLE
>> targets/nrf52dk_boot
>>app=apps/boot
>>bsp=hw/bsp/nrf52dk
>>build_profile=optimized
>> 
>> 
>> 
>>> On May 5, 2016, at 2:35 PM, Cody Smith <cody.smi...@gmail.com> wrote:
>>> 
>>> Hey Everybody,
>>> 
>>> I've been working with the nRF52 development board PCA10040 and have had
>>> some issues running the bletiny example app that comes in the repo.  I
>>> switched to the dev branch of the repo to get the nrf52dk as opposed to
>> the
>>> nrf52pdk, but that didn't solve this particular issue (although it did
>> fix
>>> a few things).
>>> 
>>> I got the example code to build and load onto the board, but when it
>> runs,
>>> I see the following output printed to the terminal window on my desktop
>>> (coming over the serial port in Tera Term) continuously.
>>> 
>>> :Unhandled interrupt (3), exception sp 0x20007fa0
>>> 0: r0:0x20007fa0  r1:0x200043a0  r2:0x0800  r3:0x0100
>>> 0: r4:0x200043a0  r5:0x0800  r6:0x0100  r7:0x200043c0
>>> 0: r8:0x  r9:0x00016047 r10:0x0501 r11:0x
>>> 0:r12:0x200043c0  lr:0x  pc:0x00016047 psr:0x0501
>>> 0:ICSR:0x0803 HFSR:0x4000 CFSR:0x8200
>>> 0:BFAR:0x0800 MMFAR:0x0800
>>> 
>>> The blinky app was running just fine.  I also see this same thing with
>> the
>>> bletest app as well.  Any ideas on what this could be?
>>> 
>>> Thanks for the help,
>>> 
>>> Cody
>> 
>> 



Re: Proposed changes to Nimble host

2016-04-18 Thread will sanfilippo
Yeah, I can see why you chose OS_EVENT_TIMER. It is almost like we should 
rename that event type :-) But I agree with everything you say below; creating 
a new event type for this seems wasteful. I am not quite sure what you mean by 
"My concern there is that applications may want to add special handling for 
certain event types…”. Are you referring to the events that a package may 
require of an application?

Anyway, solving this generically is definitely what we need to do.

Will


> On Apr 18, 2016, at 10:06 AM, Christopher Collins <ccoll...@apache.org> wrote:
> 
> On Mon, Apr 18, 2016 at 09:43:35AM -0700, Christopher Collins wrote:
>> On Mon, Apr 18, 2016 at 09:18:16AM -0700, will sanfilippo wrote:
>>> For #2, my only “concerns” (if you could call them such) are:
>>> * Using OS_EVENT_TIMER as opposed to some other event. Should all
>>> OS_EVENT_TIMER events be caused by a timer? Probably no big deal… What
>>> events are going to be processed here? Do you envision many host
>>> events?
>> 
>> Yes, I agree.  I think a more appropriate event type would be
>> OS_EVENT_CALLBACK or similar.  I am a bit leery about adding a new OS
>> event type for this case, because it would require all applications to
>> handle an extra event type without any practical benefit.  Perhaps
>> mynewt could relieve this burden with an "os_handle_event()" function
>> which processes these generic events.  My concern there is that
>> applications may want to add special handling for certain event types,
>> so they wouldn't want to call the helper function anyway.
>> 
>> The OS events that the host would generate are:
>>* Incoming ACL data packets.
>>* Incoming HCI events.
>>* Expired timers.
> 
> (I meant "process", not "generate"!)
> 
> Oops... I went down a rabbit hole and forgot to address the main point
> :).  What we would *really* want here is something like:
>* BLE_HS_EVENT_ACL_DATA_IN
>* BLE_HS_EVENT_HCI_EVENT_IN
> 
> However, the issue here is that the event type IDs are defined in a
> single "number-space".  If the host package reserves IDs for its own
> events, then no other packages can use those IDs for its own events
> without a conflict.  The 8-bit ID space is divided into two parts:
> 
> 0 - 63: Core event types (TIMER, MQUEUE_DATA, etc.)
> 64+: Per-task event types.
> 
> So, the options for the host package are:
> 1. Reserve new core event IDs.  This avoids conflicts, but permanently
>   uses up a limited resource.
> 2. Use arbitrary per-task event IDs.  This has the potential for
>   conflicts, and doesn't strike me as a particularly good solution.
> 3. Use a separate host task.  This allows the host use IDs in the per-task
>   ID space without the risk of conflict.
> 4. Leverage existing core events.  This is what I proposed.  It avoids
>   conflicts and doesn't require any new event IDs, but it does feel a
>   bit hacky to use the TIMER event ID for something that isn't a timer.
> 
> I think this might be a common problem for other packages in the future.
> I don't think it is that unusual for a package to not create its own
> task, but still have the need to generate OS events.  So perhaps we
> should think about how to solve this general problem.
> 
> Chris



Re: Proposed changes to Nimble host

2016-04-18 Thread will sanfilippo
All sounds excellent!

+1 for #1. That only seems like a good thing.

For #2, my only “concerns” (if you could call them such) are:
* Using OS_EVENT_TIMER as opposed to some other event. Should all 
OS_EVENT_TIMER events be caused by a timer? Probably no big deal… What events 
are going to be processed here? Do you envision many host events?
* I wonder about the complexity of this from an application developer's 
standpoint. Not saying that what you propose would be more or less complex; 
just something we should consider when making these changes.

On a side note (I guess it is related), we should consider how applications are 
going to initialize the host and/or the controller in regards to system memory 
requirements (i.e. mbufs). While our current methodology to create a BLE app is 
not rocket science, I think we could make it a bit simpler.


> On Apr 17, 2016, at 3:57 PM, Christopher Collins  wrote:
> 
> Hello all,
> 
> The Mynewt BLE stack is called Nimble.  Nimble consists of two packages:
>* Controller (link-layer) [net/nimble/controller]
>* Host (upper layers) [net/nimble/host]
> 
> This email concerns the Nimble host.  
> 
> As I indicated in an email a few weeks ago, the code size of the Nimble
> host had increased beyond what I considered a reasonable level.  When
> built for the ARM cortex-M4, with security enabled and the log level set
> to INFO, the host code size was about 48 kB.  In recent days, I came up
> with a few ideas for reducing the host code size.  As I explored these
> ideas, I realized that they open the door for some major improvements in
> the fundamental design of the host.  Making these changes would
> introduce some backwards-compatibility issues, but I believe it is
> absolutely the right thing to do.  If we do this, it needs to be done
> now while Mynewt is still in its beta phase.  I have convinced myself
> that this is the right way forward; now I would like to see what the
> community thinks.  As always, all feedback is greatly appreciated.
> 
> There are two major changes that I am proposing:
> 
> 1. All HCI command/acknowledgement exchanges are blocking.
> 
> Background: The host and controller communicate with one another via the
> host-controller-interface (HCI) protocol.  The host sends _commands_ to
> the controller; the controller sends _events_ to the host.  Whenever the
> controller receives a command from the host, it immediately responds
> with an acknowledgement event.  In addition, the controller also sends
> unsolicited events to the host to indicate state changes or to request
> information in a subsequent command.
> 
> In the current host, all HCI commands are sent asynchronously
> (non-blocking).  When the host wants to send an HCI command, it
> schedules a transmit operation by putting an OS event on its own event
> queue.  The event points to a callback which does the actual HCI
> transmission.  The callback also configures a second callback to be
> executed when the expected acknowledgement is received from the
> controller.  Each time the host receives an HCI event from the
> controller, an OS event is put on the host's event queue.  Processing of
> this OS event ultimately calls the configured callback (if it is an
> acknowledgement), or a hardcoded callback (if it is an unsolicited HCI
> event).
> 
> This design works, but it introduces a number of problems.  First, it
> requires the host code to maintain some quite complex state machines for
> what seem like simple HCI exchanges.  This FSM machinery translates into
> a lot of extra code.  There is also a lot of ugliness involved in
> canceling scheduled HCI transmits.
> 
> Another complication with non-blocking HCI commands is that they require
> the host to jump through a lot of hoops to provide feedback to the
> application.  Since all the work is done in parallel by the host task,
> the host has to notify the application of failures by executing
> callbacks configured by the application.  I did not want to place any
> restrictions on what the application is allowed to do during these
> callbacks, which means the host has to ensure that it is in a valid
> state whenever a callback gets executed (no mutexes are locked, for
> example).  This requires the code to use a large number of mutexes and
> temporary copies of host data structures, resulting in a lot of
> complicated code.
> 
> Finally, non-blocking HCI operations complicates the API presented to
> the application.  A single return code from a blocking operation is
> easier to manage than a return code plus the possibility of a callback
> being executed sometime in the future from a different task.  A blocking
> operation collapses several failure scenarios into a single function
> return.
> 
> Making HCI command/acknowledgement exchanges blocking addresses all of
> the above issues:
>* FSM machinery goes away; controller response is indicated in the
>  return code of the HCI send function.

Re: BLE_HS_ENOMEM when trying to connect to a second peripheral

2016-07-23 Thread will sanfilippo
Isn't it “NIMBLE_OPT_MAX_CONNECTIONS”? I see you have NIMBLE_OPT_MAX_CONNECTION 
(not plural).
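
In other words, the target pkg.yml entry should read:

    pkg.cflags:
        - "-DNIMBLE_OPT_MAX_CONNECTIONS=8"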

> On Jul 23, 2016, at 4:05 PM, Marco Ferreira  wrote:
> 
> My target pkg.yml:
> 
> pkg.cflags:
>   - "-DNIMBLE_OPT_MAX_CONNECTION=8"
> 
> Yes, ble_gap_connect() is returning the BLE_HS_ENOMEM.
> 
> On Jul 23 2016, at 7:43 pm, Christopher Collins ccoll...@apache.org
> wrote:  
> 
>> On Sat, Jul 23, 2016 at 09:32:59PM +, Marco Ferreira wrote:  
>>  Here's the complete config in my code:  
>>   
>>   
>>   
>>  /* Configure the host. */  
>>  cfg = ble_hs_cfg_dflt;  
>>  cfg.max_hci_bufs = 16;  
>>  // cfg.max_connections = MAX_CONNECTIONS*3;  
>>  // cfg.max_gattc_procs = 5;  
>>  cfg.max_l2cap_chans = cfg.max_connections * 3;  
>>  // cfg.max_l2cap_sig_procs = 1;  
>>  // cfg.sm_bonding = 1;  
>>  // cfg.sm_our_key_dist = BLE_SM_PAIR_KEY_DIST_ENC;  
>>  // cfg.sm_their_key_dist = BLE_SM_PAIR_KEY_DIST_ENC;  
>>  // cfg.store_read_cb = ble_store_ram_read;  
>>  // cfg.store_write_cb = ble_store_ram_write;  
>>   
>>  /* Populate config with the required GATT server settings. */  
>>  // cfg.max_attrs = 0;  
>>  // cfg.max_services = 0;  
>>  // cfg.max_client_configs = 0;
> 
>> 
> 
>> Hmm, those settings look fine to me. Are you sure  
>> -DNIMBLE_OPT_MAX_CONNECTIONS is set to 2? Are you setting it as a target  
>> variable in the target's pkg.yml file?
> 
>> 
> 
>> Also, just to confirm, is it ble_gap_connect() that is returning  
>> BLE_HS_ENOMEM?
> 
>> 
> 
>> Thanks, Chris
> 



Re: 3rd party SDKs, and Interrupt vectors on NRF52

2016-07-29 Thread will sanfilippo
One thing I didnt mention earlier was that some cortex-M processors do have a 
VTOR register which can be used to relocate the vector table. The register has 
some “odd” alignment restrictions but those are not all that tough to deal 
with. This would be one way to deal with the bootloader issue but not all 
cortex-M processors have a VTOR.

I guess in the end it will be up to folks when they create their BSP to decide 
what they want. It shouldn't be too difficult to modify the code/linker scripts 
to accommodate both approaches.
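
For reference, on parts that do have a VTOR, runtime registration looks
roughly like this with CMSIS (the RAM table symbol, IRQ, and handler names
here are illustrative):

    /* Vector table copied to suitably aligned RAM at startup. */
    extern uint32_t __vector_tbl_ram[];

    SCB->VTOR = (uint32_t)__vector_tbl_ram; /* VTOR alignment rules apply */
    NVIC_SetVector(SPI1_IRQn, (uint32_t)my_spi_irq_handler);
    NVIC_EnableIRQ(SPI1_IRQn);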

> On Jul 27, 2016, at 7:43 AM, Kevin Townsend <ke...@adafruit.com> wrote:
> 
> Hi Will,
> 
> That's a good point. I've usually worked with very simple boot loaders in
> the past. One of my biggest problems with the SD architecture (and
> motivation for moving to mynewt and nimble) was the lack of strict timing
> and direct interrupt control at the app level due to the interrupt handling
> in the SD.
> 
> But one or two clear examples of how to register your interrupt handler
> should probably be clearly documented with the current approach then since
> it differs from what most customers are used to(?) with CMSIS and most
> vendor supplied example code.
> 
> I don't have strong feelings either way myself though and the bootloader
> argument is valid if you want to handle systick interrupts in both chunks
> of code for example or radio and SPI events in an advanced bootloader.
> 
> K.
> 
> Le mercredi 27 juillet 2016, will sanfilippo <wi...@runtime.io> a écrit :
> 
>> So how does a bootloader that uses interrupts and an application that uses
>> the same interrupt work? If you have interrupt vectors in .text, only one
>> “application” can own that interrupt vector. Of course, the interrupt
>> vector in . text can look up a function pointer to call, but in that case
>> you might as well just register the interrupt vector.
>> 
>> 
>>> On Jul 26, 2016, at 7:45 PM, Kevin Townsend <ke...@adafruit.com
>> <javascript:;>> wrote:
>>> 
>>> 
>>>> Bonsoir,
>>> Bilingual, impressive! :P
>>>> I’m OK with NVIC_SetVector(), and indeed, the driver that uses Nordic’s
>> function can call that in the driver init function — however, I think it’s
>> worth understanding why we want this located in RAM.  It seems reasonable
>> to me to fix the interrupt vectors in .text and avoid the NVIC_SetVector()
>> call — it seems like we’re spending a bunch of RAM on something that will
>> never change dynamically.
>>>> 
>>>> Thoughts?
>>> Personally, I can't remember the last time I had the interrupt vectors
>> anywhere except fixed in .text.  I suspect 99% of the people working with
>> ARM Cortex devices are also familiar with and probably are sort of
>> expecting the typical 'weak' approach pushed by most CMSIS based vendor
>> startup code. That said ... I certainly don't feel strongly about this
>> either way, but you probably are losing a small chunk of SRAM for something
>> 0.1% of users might actually make meaningful use of.
>>> 
>>> Kevin
>> 
>> 



Re: 3rd party SDKs, and Interrupt vectors on NRF52

2016-07-27 Thread will sanfilippo
So how does a bootloader that uses interrupts and an application that uses the 
same interrupt work? If you have interrupt vectors in .text, only one 
“application” can own that interrupt vector. Of course, the interrupt vector in 
. text can look up a function pointer to call, but in that case you might as 
well just register the interrupt vector.


> On Jul 26, 2016, at 7:45 PM, Kevin Townsend  wrote:
> 
> 
>> Bonsoir,
> Bilingual, impressive! :P
>> I’m OK with NVIC_SetVector(), and indeed, the driver that uses Nordic’s 
>> function can call that in the driver init function — however, I think it’s 
>> worth understanding why we want this located in RAM.  It seems reasonable to 
>> me to fix the interrupt vectors in .text and avoid the NVIC_SetVector() call 
>> — it seems like we’re spending a bunch of RAM on something that will never 
>> change dynamically.
>> 
>> Thoughts?
> Personally, I can't remember the last time I had the interrupt vectors 
> anywhere except fixed in .text.  I suspect 99% of the people working with ARM 
> Cortex devices are also familiar with and probably are sort of expecting the 
> typical 'weak' approach pushed by most CMSIS based vendor startup code. That 
> said ... I certainly don't feel strongly about this either way, but you 
> probably are losing a small chunk of SRAM for something 0.1% of users might 
> actually make meaningful use of.
> 
> Kevin



Re: Assert failed in ble_ll_hci.c:999

2016-07-07 Thread will sanfilippo
There are a few of them actually. Note that we return an error code when the 
block being freed is not part of the memory pool, or when either the block or 
pool pointer is NULL. There was some debate on whether to return an error in 
the latter two cases, but not any debate about the former.

What does being part of the pool mean? It means that the address of the block 
being freed is within the memory range of the memory pool and also that the 
address is on a “block boundary”. For example, say you have 100, 20 byte memory 
blocks and the memory block is at address 0x1000. If you attempt to free 
something that is at an address that is less than 0x1000 or greater than 0x1000 
+ (100*20) an error will be returned. Also, if the address is not on a 20-byte 
boundary (starting from the beginning address of the memory pool) an error will 
be returned.
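
In code, the checks described above look roughly like this. This is a sketch
only; the field names follow struct os_mempool but the real os_memblock_put()
implementation differs in detail:

    os_error_t
    os_memblock_put_sketch(struct os_mempool *mp, void *block)
    {
        uint32_t start;
        uint32_t end;
        uint32_t addr;

        if ((mp == NULL) || (block == NULL)) {
            return (OS_INVALID_PARM);
        }

        start = (uint32_t)mp->mp_membuf_addr;
        end = start + (mp->mp_num_blocks * mp->mp_block_size);
        addr = (uint32_t)block;

        /* Block must lie within the pool... */
        if ((addr < start) || (addr >= end)) {
            return (OS_INVALID_PARM);
        }

        /* ...and fall on a block boundary. */
        if (((addr - start) % mp->mp_block_size) != 0) {
            return (OS_INVALID_PARM);
        }

        /* put the block back on the free list (omitted) */
        return (OS_OK);
    }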

Will

NOTE: the “true block size” is based on the value of OS_ALIGNMENT. Currently 
this is 4 for our architectures but if you change it the block size may change 
as all memory blocks are padded to “OS_ALIGNMENT” boundaries (for example, you 
allocate a 21 byte block, you get a 24 byte block).

> On Jul 7, 2016, at 5:52 PM, Simon Ratner  wrote:
> 
> I've been seeing this semi-regularly lately, can't make sense of it.
> 
>66490:Assert ; failed in ble_ll_hci.c:999
>66490:Unhandled interrupt (2), exception sp 0x20001788
>66490: r0:0x  r1:0x2000179c  r2:0x8000  r3:0xe000ed00
>66490: r4:0x  r5:0x03e7  r6:0x00021fec  r7:0x20001823
>66490: r8:0x  r9:0x r10:0x1fff8000 r11:0x
>66490:r12:0x  lr:0xe2fd  pc:0x0001fd60 psr:0x8100
>66490:ICSR:0x00411002
> 
> The line points to
> https://github.com/apache/incubator-mynewt-core/blob/develop/net/nimble/controller/src/ble_ll_hci.c#L998
> (I am on the tip of develop). Under what circumstances would returning a
> memblock to the pool fail?
> 
> Cheers,
> simon



Re: Assert failed in ble_ll_hci.c:999

2016-07-07 Thread will sanfilippo
It is a bug in our stack. We are going to fix it soon so you should see an 
email that discusses the issue and fix. Thanks for catching this!

Will
> On Jul 7, 2016, at 6:37 PM, Simon Ratner <si...@proxy.co> wrote:
> 
> So, the assert being where it is, would your bet be on a controller bug, a
> host bug, or the app corrupting os memory in some way? (I've certainly done
> my fair share of the latter ;)
> 
> 
> 
> On Thu, Jul 7, 2016 at 6:06 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> There are a few of them actually. Note that we returned an error code when
>> the block being freed was not part of the memory pool or either the block
>> or pool is NULL. There was some debate on whether to return an error in the
>> latter two cases, but not any debate about the former.
>> 
>> What does being part of the pool mean? It means that the address of the
>> block being freed is within the memory range of the memory pool and also
>> that the address is on a “block boundary”. For example, say you have 100,
>> 20 byte memory blocks and the memory block is at address 0x1000. If you
>> attempt to free something that is at an address that is less than 0x1000 or
>> greater than 0x1000 + (100*20) an error will be returned. Also, if the
>> address is not on a 20-byte boundary (starting from the beginning address
>> of the memory pool) an error will be returned.
>> 
>> Will
>> 
>> NOTE: the “true block size” is based on the value of OS_ALIGNMENT.
>> Currently this is 4 for our architectures but if you change it the block
>> size may change as all memory blocks are padded to “OS_ALIGNMENT”
>> boundaries (for example, you allocate a 21 byte block, you get a 24 byte
>> block).
>> 
>>> On Jul 7, 2016, at 5:52 PM, Simon Ratner <si...@proxy.co> wrote:
>>> 
>>> I've been seeing this semi-regularly lately, can't make sense of it.
>>> 
>>>   66490:Assert ; failed in ble_ll_hci.c:999
>>>   66490:Unhandled interrupt (2), exception sp 0x20001788
>>>   66490: r0:0x  r1:0x2000179c  r2:0x8000  r3:0xe000ed00
>>>   66490: r4:0x  r5:0x03e7  r6:0x00021fec  r7:0x20001823
>>>   66490: r8:0x  r9:0x r10:0x1fff8000 r11:0x
>>>   66490:r12:0x  lr:0xe2fd  pc:0x0001fd60 psr:0x8100
>>>   66490:ICSR:0x00411002
>>> 
>>> The line points to
>>> 
>> https://github.com/apache/incubator-mynewt-core/blob/develop/net/nimble/controller/src/ble_ll_hci.c#L998
>>> (I am on the tip of develop). Under what circumstances would returning a
>>> memblock to the pool fail?
>>> 
>>> Cheers,
>>> simon
>> 
>> 



Re: Read rssi of established connection

2016-07-05 Thread will sanfilippo
A comment regarding RSSI: all frames received within a connection event will 
have an RSSI measurement. We only count valid CRC frames. The RSSI applies to 
empty pdu’s as well so the RSSI will get updated as long as the connection is 
valid. We also dont average the RSSI in any way; the last data channel pdu 
received will update the RSSI in the connection.

Will

PS I hope the term “Data Channel PDU” does not confuse folks. All that means is 
a packet sent on a data channel as opposed to an advertising channel. Thus, LL 
control PDU’s, empty pdu’s, and PDU’s with actual user data in them will all 
update the RSSI in the connection.
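
Once the dedicated query function Chris proposes below exists, usage would
look something like this (the function name is illustrative):

    int8_t rssi;
    int rc;

    rc = ble_gap_conn_rssi(conn_handle, &rssi);
    if (rc == 0) {
        console_printf("rssi of last rx'd data channel pdu: %d\n", rssi);
    }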

> On Jul 2, 2016, at 6:54 PM, Christopher Collins  wrote:
> 
> On Sat, Jul 02, 2016 at 10:18:03AM -0700, Simon Ratner wrote:
>> Correct, the return value is 4, and the out param remains unchanged.
>> 
>> I am testing on nrf51; have tried calling it both directly from the
>> EVENT_CONNECT callback, as well as some time later, just in case it was
>> state-related. For the record, I am fairly certain it used to work
>> not-too-long ago, so perhaps this is a recent breakage?
> 
> Bummer.  I will check it out.  Thanks for the heads up.
> 
>> In the descriptor is perfectly fine.
> 
> After thinking about it some more, I think it might be best to have a
> dedicated function for querying the RSSI rather than putting it in the
> connection descriptor.  The operation requires communication with the
> controller, and I think there is some value in isolating
> host-controller-communication when possible.
> 
>> For my own edification, is there a heartbeat of some sort while the
>> connection is open, or is the value just representative of the last
>> packet seen? Or a one-off value at time of establishment?
> 
> (Will, please chime in if I am talking nonsense)
> 
> It is the RSSI of the most recently received data packet.  If the peer
> isn't sending any data, then the RSSI value won't get updated.
> 
> Chris



Re: Things I'd like to see

2016-07-11 Thread will sanfilippo
I have mixed feelings about comments. In my view, it is OK to not comment the 
code heavily if there is a document that explains the code. Either is 
sufficient in my opinion. Of course, keeping to Doxygen style comments for 
public API is a good idea. Do we run doxygen automatically and can we see what 
the output looks like for mynewt? I generally use doxygen style comments for 
all of my functions but I have to admit I am not always consistent.
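
For reference, a Doxygen-style comment on a public API looks something like
this (the wording is illustrative):

    /**
     * Puts a memory block back into the pool it was allocated from.
     *
     * @param mp            The memory pool to return the block to.
     * @param block_addr    The address of the block to free.
     *
     * @return 0 on success; nonzero on failure.
     */
    os_error_t os_memblock_put(struct os_mempool *mp, void *block_addr);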

The other thing about comments and documentation: it is not easy to keep them 
in sync with the actual code. People change things and then the 
comments/documents get out of sync. While this is not a reason to not 
document/comment, it can sometimes be worse than having no 
comments/documentation.

The issue is always about enforcement; I think we need to have a conversation 
about how (and whether) we enforce adherence to the coding standards we 
create.

Will

> On Jul 11, 2016, at 7:58 AM, Christopher Collins  wrote:
> 
> On Thu, Jul 07, 2016 at 04:19:33PM -0400, David G. Simmons wrote:
>> As I’m working through all the mynewt code, something I’d love to see
>> more of are comments in the code describing what’s going on, etc. I
>> admit to not being the best at commenting my code — I’m working on it
>> — but it would be really helpful, especially as more contributors join
>> the party, to have well documented code so newbies like me can get up
>> to speed on what the code is actually doing more quickly.
>> 
>> This would have the adde dbenefit of allowing us to use something like
>> Doxygen to auto-generate documentation on the code in a more
>> human-readable form.
>> 
>> What do others think of implementing some code-documentation
>> standards?
> 
> We definitely need to do a better job with comments. The coding
> standards document contains this clause:
> 
>All public APIs should be commented with Doxygen style comments
>describing purpose, parameters and return values.  Private APIs need
>not be documented.
> 
> Do you think the language needs to be stronger or more specific?
> 
> Thanks,
> Chris



OS_TICKS_PER_SEC

2016-07-01 Thread will sanfilippo
Hello:

Recently there has been some discussion amongst some of us regarding the value 
of OS_TICKS_PER_SEC.

There are two things I want to bring up here. The first is that the default is 
1000 (or 1024 in some cases). The second is where this is defined.

Typically, most RTOSes really don't need a 1 millisecond tick; that is pretty 
fast. If this were to change to a 10 msec tick (100 ticks per second; it would 
be 128 for devices using a 32.768 kHz crystal), would anyone object?

Related to the above issue is where we define OS_TICKS_PER_SEC. In my opinion 
where it is placed now is not optimal; it is in the hw/mcu directories. What 
this means is that any application/target being built will have to live with 
the OS_TICKS_PER_SEC defined in the MCU directory. It seems to me that a more 
appropriate place would be in the application or as a target variable. I guess 
we could have put this in the BSP as well but not quite sure it belongs there 
either. It could be that there is simply no great solution for this...
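
One low-friction option would be to guard the default so that a target can
override it, e.g.:

    /* In the MCU (or BSP) header; sketch only: */
    #ifndef OS_TICKS_PER_SEC
    #define OS_TICKS_PER_SEC (128)  /* divides a 32.768 kHz crystal evenly */
    #endif

and then, for a target that wants something else:

    newt target set my_target cflags="-DOS_TICKS_PER_SEC=100"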

Anyway, unless I hear objections, I will be changing the OS_TICKS_PER_SEC for 
the nrf51 and nrf52. Other mcu’s may get changed later.

Thanks

Will




Creating branch for 1.0.0 beta2 release

2017-02-01 Thread will sanfilippo
Hello:

Just a heads up. I am going to create the 1.0.0 beta 2 release branch.


Re: BLE HCI support on NRF52DK

2017-02-03 Thread will sanfilippo
Hi Andrzej

Thanks for pointing me to Vol 2 Part E, Section 4.4. I was recalling a section 
of the spec that talked about this but could not find it when I sent this 
email. Thus, I completely agree that the controller sending a NOOP does not in 
any way indicate that it reset. It is fine if the controller does send a NOOP, 
but the host cannot use that as an indication that the controller reset. That 
does make things a bit tricky though as you mention, but hopefully if something 
is really badly out of sync the host will figure it out and reset the 
controller.

I was also thinking of the following scenario, which I should have explained a 
bit better. If the controller is powered off, it is not driving the flow 
control line, so I am not sure what would happen HW-wise in this case. It could 
be that the flow control line is floating, and therefore the host could see it 
in various states. Therefore, I would suspect that when a host issues an HCI 
Reset and does not get a response for some amount of time, it just keeps 
issuing the HCI Reset until it gets a response.
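
In pseudo-code, that host-side loop would be something like this (the function
names are illustrative, not the actual host API):

    int rc;

    /* Keep sending HCI Reset until the controller acks it. */
    do {
        rc = hci_send_reset();                    /* hypothetical */
        if (rc == 0) {
            rc = hci_wait_ack(RESET_ACK_TIMEOUT); /* hypothetical */
        }
    } while (rc != 0);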

Given that a controller can send a NOOP on power up, I can't see how we can 
guarantee that the following will NOT happen:

* Host sends HCI Reset
* Controller sends NOOP
* Controller sends Command Complete w/Reset opcode

I can also see this happening:

* Host sends HCI Reset
* Controller sends NOOP
* Nothing else happens

I certainly agree that once the controller actively takes control of the flow 
control line it should honor the HCI Reset although I still see the possibility 
of the two scenarios described above happening.

Regarding HW Error: that is something we can do in the controller as we can 
look at the reason why the device reset and send a HW error event.


> On Feb 3, 2017, at 12:12 PM, Andrzej Kaczmarek 
> <andrzej.kaczma...@codecoup.pl> wrote:
> 
> Hi Will,
> 
> On Fri, Feb 3, 2017 at 7:08 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> I might be getting a bit confused here so hopefully I am making some
>> sense. I seem to recall some discussion around this in the past but I cant
>> recall :-) Anyway...
>> 
>> It is my understanding that the first thing a controller should do when it
>> powers up is send a NOOP. Looking at the Core V4.2 Spec, Vol 6 Part D
>> Section 2 you can see a message sequence chart that shows this. It sounds
>> like folks think MyNewt is different than other controllers in this
>> respect. If so, we can change that behavior, but it makes sense to me to do
>> this, as it will inform the host that the controller has powered up.
>> 
> 
> The section you quote is only informative (see section 1.1 of the same
> part) and the diagram is only one of possibilities. The actual requirement
> is in Vol 2, Part E, Section 4.4 which states that after power up host is
> allowed to send up to 1 outstanding command so 1 credit is assumed here.
> Also controller does not need to send noop, but it is also not an error to
> do so.
> 
> Of course, there is a chicken and egg problem here. If the controller is
>> not powered up and the host sends a HCI Reset, the host is obviously not
>> going to get a response. I am also not sure one can trust the flow control
>> lines if the board is not powered up but one would hope that RTS/CTS are
>> pulled the proper way if the controller is not powered.
>> 
> 
> I guess host can assume that CTS/RTS lines work properly, otherwise there
> is no way to detect when controller is ready to receive something (i.e. is
> attached).
> 
> 
>> Certainly, an interesting issue with the MyNewt HCI firmware would be the
>> order in which the UART gets initialized and when the LL is initialized. In
>> the end, I dont think it should really matter, as the host should have to
>> deal with the controller not being ready to receive the HCI Reset.
>> 
> 
> My understanding of spec section I mentioned is that controller should be
> always ready to receive HCI Reset after power up. If it is not, then flow
> control on transport layer should not be enabled.
> 
> 
>> Here are the basic scenarios and what I would expect:
>> 
>> 1. Controller powers up first and host is not powered or not ready
>> * Controller issues NOOP but host does not see it.
>> * Host wakes up and sends HCI Reset.
>> * Host gets Command Complete (with proper opcode) and all is well
>> 
> 
> Agree.
> 
> 2. Host powers up first and controller powers up some time later
>> * Host sends HCI Reset but gets no response.
>> * Host sits in a loop, sending HCI Resets periodically.
>> * If Host gets a NOOP, it knows that the controller has powered up. In
>> this case, the host should issue HCI Reset and should get a Command
>> Complete.

[DISCUSS] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-06 Thread will sanfilippo
Hi all,

This thread is for any and all discussion regarding the release of
Apache Mynewt 1.0.0-b2-incubating-rc1.  All feedback is welcome.

Thanks,
Will


[VOTE] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-06 Thread will sanfilippo
Hello all,
I am pleased to be calling this vote for the source release of Apache
Mynewt 1.0.0, beta 2.

Apache Mynewt is a community-driven, permissively licensed open source
initiative for constrained, embedded applications. Mynewt provides a
real-time operating system, flash file system, network stacks, and
support utilities for real-world embedded systems.

For full release notes, please visit the Apache Mynewt Wiki:
https://cwiki.apache.org/confluence/display/MYNEWT/Release+Notes

This release candidate was tested as follows:
   1. Manual execution of the Mynewt test plan:
  https://cwiki.apache.org/confluence/display/MYNEWT/Apache+Mynewt+Test+Plan
  The test results can be found at:
  https://cwiki.apache.org/confluence/display/MYNEWT/1.0.0-b2+Test+Results
   2. The full unit test suite for this release was executed via "newt
  test all" with no failures.  This testing was performed on the
  following platforms:
* OS X 10.10.5
* Linux 4.4.6 (Gentoo)

The release candidate to be voted on is available at:
https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/

The commits under consideration are as follows:
blinky:
   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-blinky
   commit a69b409197a845bc75748af564cb08c4ec7701d4
core:
   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-core
   commit de35d2337189a69d97aa3fdccc4f7bfaeb31efc9
newt:
   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-newt
   commit fdac74ff83f21a11c7fbaa2e1adc2d50cbf1e612

In addition, the following newt convenience binaries are available:
   linux: 
https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/apache-mynewt-newt-bin-linux-1.0.0-b2-incubating.tgz
   osx: 
https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/apache-mynewt-newt-bin-osx-1.0.0-b2-incubating.tgz

The release candidate is signed with a GPG key available at:
https://dist.apache.org/repos/dist/dev/incubator/mynewt/KEYS

The vote is open for at least 72 hours and passes if a majority of at
least three +1 PPMC votes are cast.
[ ] +1 Release this package
[ ]  0 I don't feel strongly about it, but don't object
[ ] -1 Do not release this package because…

Anyone can participate in testing and voting, not just committers,
please feel free to try out the release candidate and provide your
votes.

A separate [DISCUSS] thread will be opened to talk about this release
candidate.

Thanks,
Will

Re: sysint() fails

2017-02-08 Thread will sanfilippo
David:

It seems like, from this email, that things are now working for you. Are you 
still going to vote -1 or are you going to change your vote?


> On Feb 8, 2017, at 5:33 AM, David G. Simmons  wrote:
> 
> 
>> On Feb 7, 2017, at 2:38 PM, marko kiiskila  wrote:
>> 
>> can you get a backtrace of that crash?
> 
> Sorry, I was not able to get a backtrace ... my shell history didn't go back 
> far enough and I've been playing around with stuff for hours. 
> 
>> 
>> Develop branch and the 1.0.0 beta2 release branches have diverged a bit, so 
>> we
>> should see what this assert() is about.
> 
> I did get the 1.0.0B2 branch installed, and things seem to be better ... at 
> least with the bundled apps. I *did* finally have to completely erase the 
> chip and start over before it all went away.
> 
>> One issue I ran across a month back with nrf52 and sys/reboot package. The 
>> flash area
>> containing FCB was holding some other data. This was causing fcb_init() on 
>> that region to
>> return non-zero. Thereby causing sys/reboot package init to assert() during 
>> sysinit().
>> I think I had been playing around with a boot loader that was bigger in 
>> size, and had a trailing part of my big bootloader in that area.
>> 
>> The way I sorted that out was by erasing the flash, and then reinstalled 
>> bootloader
>> and my app again.
> 
> I will try this as I'm seeing the ADC malfunctioning and getting the same 
> error
> __assert_func (file=file@entry=0x0, line=line@entry=0, func=func@entry=0x0, 
> e=e@entry=0x0) at 
> repos/apache-mynewt-core/kernel/os/src/arch/cortex_m4/os_fault.c:125
> 125  asm("bkpt");
> from an assert() 
> 
> ...
> 
> Forgot to hit send yesterday ... And I found the culprit here as well. 
> 
> 
> 
> 
> 



Re: [VOTE] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-07 Thread will sanfilippo
> [X ] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because…
> 
 +1 (binding)

> Hello all,
> I am pleased to be calling this vote for the source release of Apache
> Mynewt 1.0.0, beta 2.
> 
> Apache Mynewt is a community-driven, permissively licensed open source
> initiative for constrained, embedded applications. Mynewt provides a
> real-time operating system, flash file system, network stacks, and
> support utilities for real-world embedded systems.
> 
> For full release notes, please visit the Apache Mynewt Wiki:
> https://cwiki.apache.org/confluence/display/MYNEWT/Release+Notes
> 
> This release candidate was tested as follows:
>   1. Manual execution of the Mynewt test plan:
>  
> https://cwiki.apache.org/confluence/display/MYNEWT/Apache+Mynewt+Test+Plan
>  The test results can be found at:
>  https://cwiki.apache.org/confluence/display/MYNEWT/1.0.0-b2+Test+Results
>   2. The full unit test suite for this release was executed via "newt
>  test all" with no failures.  This testing was performed on the
>  following platforms:
>* OS X 10.10.5
>* Linux 4.4.6 (Gentoo)
> 
> The release candidate to be voted on is available at:
> https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/
> 
> The commits under consideration are as follows:
> blinky:
>   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-blinky
>   commit a69b409197a845bc75748af564cb08c4ec7701d4
> core:
>   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-core
>   commit de35d2337189a69d97aa3fdccc4f7bfaeb31efc9
> newt:
>   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-newt
>   commit fdac74ff83f21a11c7fbaa2e1adc2d50cbf1e612
> 
> In addition, the following newt convenience binaries are available:
>   linux: 
> https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/apache-mynewt-newt-bin-linux-1.0.0-b2-incubating.tgz
>   osx: 
> https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/apache-mynewt-newt-bin-osx-1.0.0-b2-incubating.tgz
> 
> The release candidate is signed with a GPG key available at:
> https://dist.apache.org/repos/dist/dev/incubator/mynewt/KEYS
> 
> The vote is open for at least 72 hours and passes if a majority of at
> least three +1 PPMC votes are cast.
> [ ] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because…
> 
> Anyone can participate in testing and voting, not just committers,
> please feel free to try out the release candidate and provide your
> votes.
> 
> A separate [DISCUSS] thread will be opened to talk about this release
> candidate.
> 
> Thanks,
> Will



Re: [DISCUSS] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-07 Thread will sanfilippo
The newt binary that was committed was built with Go 1.6. There are known 
issues running Go binaries on newer versions of macOS when they were built 
with 1.6, so I suspect that is why it is crashing. You need Go 1.7 if you are 
running macOS 10.12 Sierra.

> On Feb 7, 2017, at 10:01 AM, marko kiiskila <ma...@runtime.io> wrote:
> 
> Hi,
> 
> should the NOTICE files be updated with 2017?
> Looks like blinky and newt still have copyright from 2015-2016.
> Core has it from 2015-2017.
> 
> Verified signatures. Those check out.
> 
> Checked the binaries for OSX and Linux, these seem to be mostly ok.
> newt binary for OSX is giving me occasional crash; never in a repeatable
> spot though. binary for Linux is working just fine, and newt on OSX works
> without issues when I build it from source.
> Version for newt is ok.
> 
> I can build and run blinky on both Linux and Mac.
> 
>> On Feb 6, 2017, at 5:35 PM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> Hi all,
>> 
>> This thread is for any and all discussion regarding the release of
>> Apache Mynewt 1.0.0-b2-incubating-rc1.  All feedback is welcome.
>> 
>> Thanks,
>> Will
> 



Re: sysint() fails

2017-02-07 Thread will sanfilippo
Hello David:

I did not attempt to re-test all the apps you mentioned below, but bletiny on 
the nrf52dk is working just fine.

Another note: the release is on branch 1_0_0_b2_dev. That is the branch I would 
use, or check out the tag (mynewt_1_0_0_b2_rc1_tag).

Thanks

> On Feb 7, 2017, at 8:07 AM, Christopher Collins  wrote:
> 
> Hi David,
> 
> Could your version of the newt tool be out of date?  Some backwards
> compatibility breaking changes were made about two weeks ago.  If that
> isn't the problem, could you grab a backtrace in gdb at the point of the
> crash ("bt" or "where" in gdb)?
> 
> Thanks,
> Chris
> 
> 
> On Tue, Feb 07, 2017 at 09:43:19AM -0500, David G. Simmons wrote:
>> Having some trouble this morning with the nrf52dk board.
>> 
>> 389  sysinit();
>> (gdb) n
>> 
>> Program received signal SIGTRAP, Trace/breakpoint trap.
>> __assert_func (file=file@entry=0x0, line=line@entry=0, func=func@entry=0x0, 
>> e=e@entry=0x0) at 
>> repos/apache-mynewt-core/kernel/os/src/arch/cortex_m4/os_fault.c:125
>> 125 asm("bkpt");
>> 
>> I've updated both mynewt_nordic and apache-mynewt-core to the latest develop 
>> branches, and
>> 
>> int
>> main(int argc, char **argv)
>> {
>>int rc;
>> 
>>/* Initialize OS */
>>sysinit();
>> 
>> ...
>> 
>> Fails at sysinit()
>> 
>> I've built a new bootloader (just in case). I thought maybe it was something 
>> I was doing in my app, so I built and loaded core/apps/bleprph and
>> 
>> 259  sysinit();
>> (gdb) n
>> 
>> Program received signal SIGTRAP, Trace/breakpoint trap.
>> __assert_func (file=file@entry=0x0, line=line@entry=0, func=func@entry=0x0, 
>> e=e@entry=0x0) at 
>> repos/apache-mynewt-core/kernel/os/src/arch/cortex_m4/os_fault.c:125
>> 125 asm("bkpt");
>> 
>> So it appears that something is broken for at least the nrf52dk dev board ...
>> 
>> cd repos/apache-mynewt-core/
>> DSimmons-Pro:apache-mynewt-core dsimmons$ git status -v
>> On branch develop
>> Your branch is up-to-date with 'origin/develop'.
>> cd ../mynewt_nordic/
>> DSimmons-Pro:mynewt_nordic dsimmons$ git status -v
>> On branch develop
>> Your branch is up-to-date with 'origin/develop'.
>> nothing to commit, working tree clean
>> 
>> dg
>> --
>> David G. Simmons
>> (919) 534-5099
>> Web  • Blog  • 
>> Linkedin  • Twitter 
>>  • GitHub 
>> /** Message digitally signed for security and authenticity.
>> * If you cannot read the PGP.sig attachment, please go to
>> * http://www.gnupg.com/  Secure your email!!!
>> * Public key available at keyserver.pgp.com 
>> **/
>> ♺ This email uses 100% recycled electrons. Don't blow it by printing!
>> 
>> There are only 2 hard things in computer science: Cache invalidation, naming 
>> things, and off-by-one errors.
>> 
>> 
> 
> 



Re: Scheduling time of Nimble stack

2017-01-24 Thread will sanfilippo
> while (ll_eventq_free_time_from_now( ) < 1)
> {
>  /* just loop to wait the free time slot > 1 CPU time ticks */
>  /* Nimble events have higher task priority, will keep on running */
>  if (time_out)
>   {
> return(1);
>   }
> }
> 
> /** my event require 1 CPU time ticks run here **/
> 
> //
> 
> Does this make sense?
> 
> Thanks,
> 
> Jiacheng
> 
> 
> 
> 
>> 在 2017年1月24日,14:25,WangJiacheng <jiacheng.w...@icloud.com> 写道:
>> 
>> Thanks, Will,
>> 
>> It seems I can not get the things  work by the simple way. I just want to 
>> find out a free time slot at high level to access PHY resource such as CPU 
>> and radio RF exclusively. With your explain, I should interleave my events 
>> into BLE events at low level in the same schedule queue.
>> 
>> Best Regards,
>> 
>> Jiacheng
>> 
>> 
>>> 在 2017年1月24日,13:48,will sanfilippo <wi...@runtime.io> 写道:
>>> 
>>> Jiacheng:
>>> 
>>> First thing with the code excerpt below: TAILQ_FIRST always gives you the 
>>> head of the queue. To iterate through all the queue elements you would use 
>>> TAILQ_FOREACH() or you would modify the code to get the next element using 
>>> TAILQ_NEXT. I would just use TAILQ_FOREACH. There is an example of this in 
>>> ble_ll_sched.c.
>>> 
>>> Some other things to note about scheduler queue:
>>> 1) It is possible for items to be on the queue that have already expired. 
>>> That means that the current cputime might have passed sch->start_time. 
>>> Depending on how you want to deal with things, you might be better off 
>>> doing a signed 32-bit subtract when calculating time_tmp.
>>> 2) You are not taking into account the end time of the scheduled event. The 
>>> event starts at sch->start_time and ends at sch->end_time. Well, if all you 
>>> care about is the time till the next event you wont have to worry about the 
>>> end time of the event, but if you want to iterate through the schedule, the 
>>> time between events is the start time of event N minus the end time of 
>>> event N - 1.
>>> 3) When an event is executed it is removed from the scheduler queue. Thus, 
>>> if you asynchronously look at the first item in the scheduler queue and 
>>> compare it to the time now you have to be aware that an event might be 
>>> running and that the nimble stack is using the PHY. This could also cause 
>>> you to think that nothing is going to be done in the future, but when the 
>>> scheduled event is over that item gets rescheduled and might get put back 
>>> in the scheduler queue (see #4, below).
>>> 4) Events in the scheduler queue appear only once. This is not an issue if 
>>> you are only looking at the first item on the queue, but if you iterate 
>>> through the queue this could affect you. For example, say there are two 
>>> items on the queue (item 1 is at head, item 2 is next and is last). You see 
>>> that the gap between the two events is 400 milliseconds (I just made that 
>>> number up). When item 1 is executed and done, that event will get 
>>> rescheduled. So lets say item 1 is a periodic event that occurs every 100 
>>> msecs. Item 1 will get rescheduled causing you to really only have 100 
>>> msecs between events.
>>> 5) The “end_time” of the scheduled item may not be the true end time of the 
>>> underlying event. When scheduling connections we schedule them for some 
>>> fixed amount of time. This is done to guarantee that all connections get a 
>>> place in the scheduler queue. When the schedule item executes at 
>>> “start_time” and the item is a connection event, the connection code will 
>>> keep the current connection going past the “end_time” of the scheduled 
>>> event if there is more data to be sent and the next scheduled item wont be 
>>> missed. So you may think you have a gap between scheduled events when in 
>>> reality the underlying code is still running.
>>> 6) For better or worse, scanning events are not on the scheduler queue; 
>>> they are dealt with in an entirely different manner. This means that the 
>>> underlying PHY could be used when there is nothing on the schedule queue.
>>> 
>>> I have an idea of what you are trying to do and it might end up being a bit 
>>> tricky given the current code implementation. You may be better served 
adding an item to the schedule queue but it all depends on how you want to 
prioritize BLE activity with what you want to do.

Re: [ATTENTION] incubator-mynewt-core git commit: os; spin up OS before calling. main() gets called in context of main task.

2017-01-24 Thread will sanfilippo
So you are saying that there will still be well-defined places where things get 
initialized and that there will be defined ranges for these stages? For example:

0-99: before os_init() is called.
100-199: in os_init(), after os_init() code executes.
200-299: in os_start() somewhere.

Realize that the above are just examples and not meant to be actual ranges or 
actual places where we initialize.


> On Jan 23, 2017, at 9:03 PM, Sterling Hughes 
> <sterling.hughes.pub...@gmail.com> wrote:
> 
> Also, one other thing to look at with the new sysinit changes.  I think we 
> probably need to revise the ordering on device initialization.
> 
> Right now device init has the following:
> 
> /*
> * Initialization order, defines when a device should be initialized
> * by the Mynewt kernel.
> *
> */
> #define OS_DEV_INIT_PRIMARY   (1)
> #define OS_DEV_INIT_SECONDARY (2)
> #define OS_DEV_INIT_KERNEL(3)
> 
> #define OS_DEV_INIT_F_CRITICAL (1 << 0)
> 
> 
> #define OS_DEV_INIT_PRIO_DEFAULT (0xff)
> 
> And these stages are called:
> 
> In os_init():  PRIMARY, SECONDARY
> In os_start(): KERNEL
> 
> I think it makes sense to more clearly map these stages to the new sparsely 
> designed sysinit stages, and add device init hooks throughout the system 
> startup.
> 
> Given the new sparse IDs, I’m thinking that we could do it per-ID range, i.e. 
> os_dev_initializeall(100), os_dev_initializeall(200), etc.  Within that 
> range, devices could be initialized by priority.
> 
> Thoughts?
> 
> Sterling
> 
> On 23 Jan 2017, at 19:12, Jacob Rosenthal wrote:
> 
>> Looks like this breaks splitty as app, bleprph as loader
>> Error: Syscfg ambiguities detected:
>>Setting: OS_MAIN_TASK_PRIO, Packages: [apps/bleprph, apps/splitty]
>> Setting history (newest -> oldest):
>>OS_MAIN_TASK_PRIO: [apps/splitty:10, apps/bleprph:1, kernel/os:0xfe]
>> 
>> Setting OS_MAIN_TASK_PRIO in splitty to 1 made this go away, but I don't know
>> if there are other complications related to that. Then it gets stuck
>> after confirming the image and resetting while entering the app image at
>> gcc_startup_nrf51.s Default_Handler
>> 
>> On Mon, Jan 23, 2017 at 4:48 PM, marko kiiskila <ma...@runtime.io> wrote:
>> 
>>> I pushed this change to develop.
>>> 
>>> You’ll need to update the newt tool as part of this change; as sysinit
>>> calls should not include call to os_init() anymore.
>>> 
>>> After this change you can specify multiple calls to be made to your package
>>> from sysinit().
>>> Tell newt to do this by having this kind of block in your pkg.yml.
>>> 
>>> pkg.init:
>>>ble_hs_init: 200
>>>ble_hs_init2: 500
>>> 
>>> I.e. in pkg.init block specify function name followed by call order.
>>> 
>>> And app main() should minimally look like:
>>> 
>>> int
>>> main(int argc, char **argv)
>>> {
>>> #ifdef ARCH_sim
>>>mcu_sim_parse_args(argc, argv);
>>> #endif
>>> 
>>>sysinit();
>>> 
>>>while (1) {
>>>os_eventq_run(os_eventq_dflt_get());
>>>}
>>>assert(0);
>>> 
>>>return 0;
>>> }
>>> 
>>> So there’s a call to mcu_sim_parse_args() (in case app can execute in
>>> simulator),
>>> call to sysinit(), which calls all the package init routines, followed by
>>> this main task
>>> calling os_eventq_run() for default task.
>>> 
>>> I might also want to lock the scheduler for the duration of call to
>>> sysinit();
>>> but we don’t have that facility yet. This might be a good time to add it?
>>> 
>>>> On Jan 21, 2017, at 9:00 AM, will sanfilippo <wi...@runtime.io> wrote:
>>>> 
>>>> +1 sounds good to me. I dont think the amount of changes to the app are
>>> all that many and folks should be able to deal with them pretty easily.
>>>> 
>>>> 
>>>>> On Jan 20, 2017, at 1:35 PM, Sterling Hughes <
>>> sterling.hughes.pub...@gmail.com> wrote:
>>>>> 
>>>>> Hey,
>>>>> 
>>>>> Changed the subject to call this out to more people.  :-)
>>>>> 
>>>>> Response above, because I generally think this is on the right track.
>>> In my view, we should bite the bullet prior to 1.0, and move to this
>>> approach.  I think it greatly simplifies startup, and the concept of the
>>> default event queue now ties into their being a defaul

Re: Scheduling time of Nimble stack

2017-01-23 Thread will sanfilippo
Jiacheng:

First thing with the code excerpt below: TAILQ_FIRST always gives you the head 
of the queue. To iterate through all the queue elements you would use 
TAILQ_FOREACH() or you would modify the code to get the next element using 
TAILQ_NEXT. I would just use TAILQ_FOREACH. There is an example of this in 
ble_ll_sched.c.
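
For reference, the difference looks like this (untested sketch; I am assuming 
the TAILQ entry field in struct ble_ll_sched_item is named “link” — check 
ble_ll_sched.h in your tree):

/* This always looks at the head; a loop around it never advances */
sch = TAILQ_FIRST(&g_ble_ll_sched_q);

/* This visits every element exactly once */
TAILQ_FOREACH(sch, &g_ble_ll_sched_q, link) {
    /* examine sch here */
}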

Some other things to note about scheduler queue:
1) It is possible for items to be on the queue that have already expired. That 
means that the current cputime might have passed sch->start_time. Depending on 
how you want to deal with things, you might be better off doing a signed 
32-bit subtract when calculating time_tmp.
2) You are not taking into account the end time of the scheduled event. The 
event starts at sch->start_time and ends at sch->end_time. Well, if all you 
care about is the time till the next event you won't have to worry about the end 
time of the event, but if you want to iterate through the schedule, the time 
between events is the start time of event N minus the end time of event N - 1 
(see the sketch after this list).
3) When an event is executed it is removed from the scheduler queue. Thus, if 
you asynchronously look at the first item in the scheduler queue and compare it 
to the time now you have to be aware that an event might be running and that 
the nimble stack is using the PHY. This could also cause you to think that 
nothing is going to be done in the future, but when the scheduled event is over 
that item gets rescheduled and might get put back in the scheduler queue (see 
#4, below).
4) Events in the scheduler queue appear only once. This is not an issue if you 
are only looking at the first item on the queue, but if you iterate through the 
queue this could affect you. For example, say there are two items on the queue 
(item 1 is at head, item 2 is next and is last). You see that the gap between 
the two events is 400 milliseconds (I just made that number up). When item 1 is 
executed and done, that event will get rescheduled. So lets say item 1 is a 
periodic event that occurs every 100 msecs. Item 1 will get rescheduled causing 
you to really only have 100 msecs between events.
5) The “end_time” of the scheduled item may not be the true end time of the 
underlying event. When scheduling connections we schedule them for some fixed 
amount of time. This is done to guarantee that all connections get a place in 
the scheduler queue. When the schedule item executes at “start_time” and the 
item is a connection event, the connection code will keep the current 
connection going past the “end_time” of the scheduled event if there is more 
data to be sent and the next scheduled item won't be missed. So you may think 
you have a gap between scheduled events when in reality the underlying code is 
still running.
6) For better or worse, scanning events are not on the scheduler queue; they 
are dealt with in an entirely different manner. This means that the underlying 
PHY could be used when there is nothing on the schedule queue.
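
As an untested sketch of points 1 and 2 above (again assuming the TAILQ entry 
field is named “link”, and that this runs with the scheduler queue protected, 
e.g. inside a critical section):

struct ble_ll_sched_item *sch;
struct ble_ll_sched_item *prev = NULL;
uint32_t now = os_cputime_get32();
int32_t gap;

TAILQ_FOREACH(sch, &g_ble_ll_sched_q, link) {
    if (prev == NULL) {
        /* Signed subtract: negative means the head item already expired */
        gap = (int32_t)(sch->start_time - now);
    } else {
        /* Time between events: start of event N minus end of event N - 1 */
        gap = (int32_t)(sch->start_time - prev->end_time);
    }
    /* ... do something with gap ... */
    prev = sch;
}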

I have an idea of what you are trying to do and it might end up being a bit 
tricky given the current code implementation. You may be better served adding 
an item to the schedule queue but it all depends on how you want to prioritize 
BLE activity with what you want to do.

Will

> On Jan 23, 2017, at 8:56 PM, WangJiacheng  wrote:
> 
> Hi, 
> 
> I’m trying to find out a free time slot between Nimble scheduled events.
> 
> I try to go through all items on the schedule queue global variable 
> “g_ble_ll_sched_q” to find out all the scheduled LL events in the near future, 
> with a function like:
> //
> uint32_t ll_eventq_free_time_from_now(void)
> {
>  struct ble_ll_sched_item *sch;
>  uint32_t cpu_time_now;
>  uint32_t time_free;
>  uint32_t time_tmp;
>   
>  time_free = 10;
>  cpu_time_now = os_cputime_get32();
> 
>  /* Look through schedule queue */
>  while ((sch = TAILQ_FIRST(&g_ble_ll_sched_q)) != NULL)
>  {
>time_tmp = sch->start_time - cpu_time_now;
>if  (time_tmp < time_free)
>{
>   time_free = time_tmp;
>}
>  }
>   
>  return (time_free);
> }
> //
> 
> Does the above function make sense to find out the free time at any given time 
> point? Or any suggestion to find out the free time slot between LL events?
> 
> 
> Thanks,
> 
> Jiacheng
> 



Re: NimBLE host advertising data API

2017-01-24 Thread will sanfilippo
I am not sure I have any intelligent comments on this, but that has never 
stopped me from commenting in the past, so…

I think a byte buffer interface is fine as long as you have helper functions to 
create that buffer. Having folks have to figure out how to create an 
advertisement without any helper functions would be a bad idea (imho).

I am not sure I completely understand your example re: Tx Power Level. Would 
this field still get added by the host or would there be a helper function that 
a developer could call to add the Tx Power Level field to the advertisement?
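
For what it is worth, building the raw buffer by hand is not too bad for the 
simple cases; each field is just a length byte, an AD type byte, and the value. 
A made-up example (untested):

static const uint8_t adv_data[] = {
    0x02, 0x01, 0x06,                   /* Flags: LE general disc, no BR/EDR */
    0x05, 0x09, 'n', 'e', 'w', 't',     /* Complete local name: "newt" */
};

A helper that packs a struct ble_hs_adv_fields into such a buffer would still 
cover the common cases without forcing every app to link the full fields API.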


> On Jan 24, 2017, at 11:45 AM, Christopher Collins  wrote:
> 
> Hello all,
> 
> I've mentioned this before - I really don't care for the advertising
> data API that we ended up with:
> http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_adv_set_fields/
> 
> I think we should change this API now before the 1.0 release.  I was
> wondering what others think.
> 
> The current API is high-level and is relatively easy to use, but
> requires a lot of code space and RAM.  I think a function which just
> takes a raw byte buffer (or mbuf) would be much better.  Then, there
> could be a helper function which converts an instance of `struct
> ble_hs_adv_fields` to a raw byte buffer.
> 
> A simple peripheral that always advertises the same data shouldn't be
> burdened with the ble_hs_adv_fields API.
> 
> There is actually a rationale as to why the API is the way it is today.
> There are a few fields in the advertisement data that the host can be
> configured to fill in automatically:
>* Flags (contains advertising type).
>* TX Power Level
> 
> I figured it would be safer if these values got calculated when
> advertising is initiated.  This is impractical if the host were handed a
> byte buffer rather than a series of fields.
> 
> Under the new proposal, the application would need to specify the
> correct advertising type when building the byte buffer, and the tx power
> level would be queried before the advertising procedure is actually
> started.  I don't think this will be a problem in practice, and I think
> the benefits in code size and RAM usage outweigh the API burden.
> 
> All thoughts welcome.
> 
> Thanks,
> Chris



Re: Scheduling time of Nimble stack

2017-01-24 Thread will sanfilippo
Jiacheng

1) Sorry about not converting msecs to os time ticks. Good catch!
2) I understand using a semaphore to wake up a task but looking at the exact 
code you have shown, I don't understand why the task would release the semaphore 
in this case. Doesn't the interrupt release the semaphore?
3) Blocking interrupts. If you block for 600-700 usecs you will cause failures 
in the underlying BLE stack. These won't be “catastrophic” (at least, I don't 
think so) but it can cause you to miss things like connection events, scan 
requests/responses, advertising events, etc. If your high priority interrupt 
fires off frequently you could possibly cause connections to fail. If you do it 
occasionally you should be ok.

> On Jan 24, 2017, at 5:08 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
> 
> Thanks, Will, you help me  a lot.
> 
> Since my task is triggered by a semaphore, and the semaphore is released by 
> another interrupt routine, so if my task does not have enough time to run and 
> goes to sleep, after waking up it will release the semaphore again. Another 
> minor change is time unit conversion (ms -> OS tick) by function 
> os_time_ms_to_ticks(). 
> 
> The main body of my task will like
> //
> while (1) 
> {
>t = os_sched_get_current_task();
>assert(t->t_func == phone_command_read_handler);
>   
>/* Wait for semaphore from ISR */
>err = os_sem_pend(&g_phone_command_read_sem, OS_TIMEOUT_NEVER);
>assert(err == OS_OK);
> 
>> time_till_next = ll_eventq_free_time_from_now();
>> if (time_till_next > X) {
>>  /* Take control of transceiver and do what you want */
>> } else {
>>  /* Delay task until LL services event. This assumes time_till_next is 
>> not negative. */
>>  os_delay = os_cputime_ticks_to_usecs(time_till_next);
>>  os_time_delay(os_time_ms_to_ticks((os_delay + 999) / 1000));
>
>/* Release the semaphore after wake up  */
>   err = os_sem_release(&g_phone_command_read_sem);
>   assert(err == OS_OK);
> 
>> }
> }
> //
> 
> I will test if this can work. BTW, current test results show there will be an 
> event collision between 2 stacks about 3~4 hours running.
> 
> I have a question about using interrupt disable, How long can the LL task be 
> blocked by interrupt disable? The high priority interrupt of Nordic’s 
> SoftDevice can be blocked only within 10us. I have an interrupt with most 
> high priority, it will take 600us~700us, is it safe to block LL task and 
> other interrupt such as Nimble Radio and OS time tick during this time?
> 
> Best Regards,
> 
> Jiacheng 
>   
> 
> 
>> 在 2017年1月25日,00:37,will sanfilippo <wi...@runtime.io> 写道:
>> 
>> Jiacheng:
>> 
>> Given that your task is lower in priority than the LL task, you are going to 
>> run into issues if you don't either disable interrupts or prevent the LL task 
>> from running. Using interrupt disable as an example (since this is easy), 
>> you would do this. The code below is a function that returns the time till 
>> the next event.:
>> 
>> os_sr_t sr;
>> uint32_t time_now;
>> int32_t time_free;
>> 
>> time_free = 1;
>> OS_ENTER_CRITICAL(sr);
>> time_now = os_cputime_get32();
>> sch = TAILQ_FIRST(&g_ble_ll_sched_q);
>> if (sch) {
>>   time_free = (int32_t)(sch->start_time - time_now);
>> }
>> OS_EXIT_CRITICAL();
>> 
>> /* 
>> * NOTE: if time_free < 0 it means that you have to wait since the LL task
>> * should be waking up and servicing that event soon.
>> */
>> return time_free;
>> 
>> Given that you are in control of what the LL is doing with your app, I guess 
>> you could do something like this in your task;
>> 
>> time_till_next = ll_eventq_free_time_from_now();
>> if (time_till_next > X) {
>>  /* Take control of transceiver and do what you want */
>> } else {
>>  /* Delay task until LL services event. This assumes time_till_next is 
>> not negative. */
>>  os_delay = os_cputime_ticks_to_usecs(time_till_next);
>>  os_time_delay((os_delay + 999) / 1000);
>> }
>> 
>> So the problem with the above code, and also with the code you have below is 
>> something I mentioned previously. If you check the sched queue and there is 
>> nothing on it, you might think you have time, but in reality you don't 
>> because the LL has pulled the item off the schedu

Re: os_time_delay in milliseconds / microseconds

2017-01-26 Thread will sanfilippo
os_cputime_delay_ticks does not put the task to sleep; it was meant for short 
blocking delays. The nrf_delay_ms() function doesn't put the task to sleep 
either so I am not sure why you are seeing a difference between the two. 
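
For reference, the two behave roughly like this (untested sketch):

os_cputime_delay_usecs(500);           /* busy-waits ~500 us; task keeps the CPU */
os_time_delay(OS_TICKS_PER_SEC / 10);  /* yields; task sleeps roughly 100 ms */

Neither should hang the app by itself, so I suspect something else is going on.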

> On Jan 26, 2017, at 6:03 AM, then yon  wrote:
> 
> Dear Jiacheng,
> 
> Thanks for your reply.
> 
> When i used os_cputime_delay_ticks() function it will cause my app hang and 
> it will never goes into idle stat.
> 
> I found the solution by using the nrf_delay_ms from nordic sdk.
> 
> Thank you.
> 
> Regards,
> 
> Then Yoong Ze
> 
> 
> On 26/1/2017 7:41 PM, WangJiacheng wrote:
>> Hi Then,
>> 
>> The OS time tick resolution is defined by OS_TICKS_PER_SEC.
>> 
>> If you want higher time resolution, use CPU time. The default CPU time tick 
>> is 1 microsecond, function os_cputime_delay_ticks() should be used.
>> 
>> Moreover, you can change the CPU timing frequency by changing CLOCK_FREQ and 
>> OS_CPUTIME_FREQ in syscfg.yml.
>> 
>> Jiacheng
>> 
>> 
>> 
>>> 在 2017年1月26日,16:00,then yon  写道:
>>> 
>>> Dear Support,
>>> 
>>> I'm working on a timing-critical app, but os_time_delay didn't give me 
>>> precise timing.
>>> 
>>> Currently the minimum delay I can get is more than 2ms with the os_time_delay 
>>> function.
>>> 
>>> Somehow I notice that the clock time has up to microsecond precision; but 
>>> how do I make a delay with that?
>>> 
>>> Thank you.
>>> 
>>> Regards,
>>> 
>>> Then Yoong Ze
>> .
>> 
> 



Re: interrupt latency in mynewt

2017-01-28 Thread will sanfilippo
Jiacheng:

How are you measuring the latency? I presume you have a scope on a GPIO input 
and maybe set a GPIO high when you are inside the ISR and measure the time 
between them? Or are you measuring the timing using a task? There is certainly 
some hard limitation on interrupt response time but I am not sure what that is 
for the nrf52 specifically. If you tell me exactly how you are measuring the 
timing, what tasks you have running and their respective priorities, I might be 
able to hazard a guess as to why there are differences. I would also like to 
know what interrupts are enabled and their priorities.
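
For reference, the kind of instrumentation I have in mind looks like this 
(untested; the pin number is made up, and the pin needs hal_gpio_init_out() 
at startup):

#include "hal/hal_gpio.h"

#define MEAS_PIN    (17)    /* hypothetical spare output pin */

static void
my_gpio_isr(void *arg)
{
    hal_gpio_write(MEAS_PIN, 1);    /* scope ch2; compare to input edge on ch1 */
    /* ... actual interrupt work ... */
    hal_gpio_write(MEAS_PIN, 0);
}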


> On Jan 27, 2017, at 6:38 PM, WangJiacheng  wrote:
> 
> Hi,
> 
> I have an interrupt triggered  by GPIO input, and observed different 
> interrupt latency from different CPU state. If all the tasks are sleep, the 
> interrupt latency is about 20us-30us, if the CPU is in idle mode with simple 
> calling “__WFI()”, the interrupt latency is about 10us-15us, and if the CPU 
> is running, the interrupt latency can be within 8us.
> 
> I do the test as following, create a low priority task with 3 case:
> 
> 1), the task loop is like
> while (1){
>   /* keep the task in sleep mode, the interrupt will be 20us-30us */
>os_time_delay(OS_TICKS_PER_SEC);
> }
> 
> 2). the task loop is like
> while (1){
>   /* put the CPU in idle mode by simply calling WFI, the interrupt will be 
> 10us-15us */
>__WFI();
> }
> 
> 3). the task loop is like
> while (1){
>   /* keep the CPU always running, the interrupt will be within 8us */
>   os_cputime_delay_usecs(100);
> }
> 
> Any idea to reduce the interrupt latency from all tasks are in sleep mode? or 
> there is a hard limitation of interrupt response time?
> 
> Thanks,
> 
> Jiacheng



Re: Scheduling time of Nimble stack

2017-01-25 Thread will sanfilippo
Well, things still might work even at 10-20msecs. All depends on the timing of 
the connection event in relation to the interrupts. You have to miss a number 
of connection events for a connection to drop. Will be interesting to see how 
it performs in those circumstances.

> On Jan 24, 2017, at 11:35 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
> 
> Thanks, Will,
> 
> Yes, my semaphore code has problem to run, I have removed the code of release 
> the semaphore, and  use “goto” to check the free time again after my task 
> wake up.
> 
> The interrupt frequency depends on the phone’s status. For standby phone, 
> there will be an interrupt every 30s, this is not a big issue since 30s is a 
> quite long time. However, for active phone such as making a call,  there will 
> be several interrupts, and the time separation will only be 10ms-20ms,  this 
> will cause BLE connections to fail. I will continue to work on this issue.
> 
> Best Regards,
> 
> Jiacheng
> 
> 
> 
>> 在 2017年1月25日,14:36,will sanfilippo <wi...@runtime.io> 写道:
>> 
>> Jiacheng
>> 
>> 1) Sorry about not converting msecs to os time ticks. Good catch!
>> 2) I understand using a semaphore to wake up a task but looking at the exact 
>> code you have shown, I dont understand why the task would release the 
>> semaphore in this case. Doesnt the interrupt release the semaphore?
>> 3) Blocking interrupts. If you block for 600-700 usecs you will cause 
>> failures in the underlying BLE stack. These wont be “catastrophic” (at 
>> least, I dont think so) but it can cause you to miss things like connection 
>> events, scan requests/responses, advertising events, etc. If your high 
>> priority interrupt fires off frequently you could possibly cause connections 
>> to fail. If you do it occasionally you should be ok.
>> 
>>> On Jan 24, 2017, at 5:08 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
>>> 
>>> Thanks, Will, you help me  a lot.
>>> 
>>> Since my task is triggered by a semaphore, and the semaphore is released by 
>>> another interrupt routine,  so if my task have no enough time to running 
>>> and go to sleep, after wake up, it will release the semaphore again. 
>>> Another minor change is time unit conversion (ms -> OS tick) by function 
>>> os_time_ms_to_ticks(). 
>>> 
>>> The main body of my task will like
>>> //
>>> while (1) 
>>> {
>>>  t = os_sched_get_current_task();
>>>  assert(t->t_func == phone_command_read_handler);
>>> 
>>>  /* Wait for semaphore from ISR */
>>>  err = os_sem_pend(&g_phone_command_read_sem, OS_TIMEOUT_NEVER);
>>>  assert(err == OS_OK);
>>> 
>>>> time_till_next = ll_eventq_free_time_from_now();
>>>> if (time_till_next > X) {
>>>>/* Take control of transceiver and do what you want */
>>>> } else {
>>>>/* Delay task until LL services event. This assumes time_till_next is 
>>>> not negative. */
>>>>os_delay = os_cputime_ticks_to_usecs(time_till_next);
>>>>os_time_delay(os_time_ms_to_ticks((os_delay + 999) / 1000));
>>>  
>>>  /* Release the semaphore after wake up  */
>>> err = os_sem_release(&g_phone_command_read_sem);
>>> assert(err == OS_OK);
>>> 
>>>> }
>>> }
>>> //
>>> 
>>> I will test if this can work. BTW, current test results show there will be 
>>> an event collision between 2 stacks about 3~4 hours running.
>>> 
>>> I have a question about using interrupt disable, How long can the LL task 
>>> be blocked by interrupt disable? The high priority interrupt of Nordic’s 
>>> SoftDevice can be blocked only within 10us. I have an interrupt with most 
>>> high priority, it will take 600us~700us, is it safe to block LL task and 
>>> other interrupt such as Nimble Radio and OS time tick during this time?
>>> 
>>> Best Regards,
>>> 
>>> Jiacheng 
>>> 
>>> 
>>> 
>>>> 在 2017年1月25日,00:37,will sanfilippo <wi...@runtime.io> 写道:
>>>> 
>>>> Jiacheng:
>>>> 
>>>> Given that your task is lower in priority than the LL task, you are going 
>>>> to run into issues if you dont either disable interrupts or prevent the LL 
>>

Re: Resources Reserved for Mynewt

2017-02-20 Thread will sanfilippo
I don't think a document exists which details all of the used resources. 
Obviously, it is based on the packages that are used in your application. Some 
general information:

OS uses TIMER1 or RTC1 (for os time)
Nimble stack uses TIMER0 for high resolution timer.
Nimble stack uses a number of the pre-programmed radio PPI.
Nimble stack uses the radio peripheral.

Other packages may use other interrupts (uart interrupts, spi, i2c, etc). Not 
sure what other PPI may be used.

Let us know if you have further questions. Should be fairly easy to determine 
which resources are used by which packages by searching the codebase.
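
As an example of where to look, some of this is controlled through syscfg; 
something like the following (values purely illustrative — check the exact 
setting names in hw/mcu/nordic and kernel/os for your tree):

syscfg.vals:
    OS_CPUTIME_FREQ: 1000000      # cputime tick rate used by the nimble stack
    OS_CPUTIME_TIMER_NUM: 0       # which hardware timer backs os_cputime
    TIMER_0: 1                    # enable TIMER0 for that purpose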


> On Feb 20, 2017, at 3:37 AM, Lm Chew  wrote:
> 
> Hi,
> 
> Is there a document that list the resource reserved for Mynewt and what 
> resources free/safe for us to use on the Nrf52?
> 
> eg.
> What PPI channels is utilized by mynewt?
> What Timer used by mynewt?
> What Software Interrupt is used by mynewt?
> 
> Best Regards,
> Chew
> 



Re: Hackillinois this weekend in Urbana IL

2017-02-22 Thread will sanfilippo
I do not know how helpful this will be, and it is just my own two cents, so take 
it for what it is worth :-)

First, this is more of a favor/ask: if you have folks going through the 
installation process and the documentation, any feedback you can provide on 
what was easy/good/hard/bad/confusing would be great to know.

As far as contributions go I just have a couple of thoughts. Not sure if these 
will be a tall order or not. The first is adding a BLE profile. There are a 
number of defined profiles and some might be implementable in a short time. 
Another idea could be to add a driver for a sensor (or sensors).

Let us know how it goes!

> On Feb 22, 2017, at 1:34 PM, Jacob Rosenthal  wrote:
> 
> Hey newt folks,
> 
> Im mentoring at https://hackillinois.org/ this weekend on bluetooth and
> embedded in general
> 
> ~1000 Students will create and contribute to open source projects all
> weekend starting friday. Im not sure what skill levels and languages Ill
> have available to me, but if anyone has ideas for mynewt contribs Im
> definitely going to tell them about mynewt and bring some targets for them
> to play with.
> 
> --Jacob



[RESULT][VOTE] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-09 Thread will sanfilippo
Hello all,

Voting for Apache Mynewt 1.0.0-b2-incubating-rc1 is now closed.  The release 
has passed this step of the process.  The vote breakdown is as follows:

+1 Christopher Collins (binding)
+1 Sterling Hughes (binding)
+1 Jim Jagielski (binding)
+1 Szymon Janc
+1 Marko Kiiskila (binding)
+1 Padmasheela Kiiskila
+1 Vipul Rahane (binding)
+1 Will San Filippo (binding)
+1 David Simmons

Total: +6 binding, +3 non-binding

We can now call a vote on the general@incubator list.

Thank you to all who voted.
Will San Filippo

Re: BLE HCI support on NRF52DK

2017-02-10 Thread will sanfilippo
Hello Alan:

I may be reading this incorrectly or mistaken, but the host does not need to 
see the NOOP from the controller. The controller needs to be ready to receive 
the HCI Reset command from the host. At least, that is my understanding after 
the email exchange with Andrzej. I would have thought there would be a retry 
mechanism as well but that is not the case. So all you need to ensure is that 
the controller is up and running before the host sends the HCI Reset.

Am I making sense? :-)

> On Feb 10, 2017, at 12:39 PM, Alan Graves <agra...@deltacontrols.com> wrote:
> 
> Hi Guys,
> 
> The BLE hardware I have to work with does not provide hardware flow control 
> with RTS/CTS. The CTS line is grounded and the RTS is left not connected. In 
> any case the BLE module is on its own board that is internally connected to 
> the Linux host processor. It is probably safe to assume that in this 
> situation the Nordic chip will be powered up and expecting the Host to be 
> ready to receive any messages sent via the BLE HCI before the Linux BlueZ 
> stack is initialized. Obviously I could arbitrarily delay the NOOP message 
> timing so that the two ends can be in sync, but to not have a timeout 
> mechanism on the HCI  protocol would seem to me to be a guarantee that a 
> deadlock condition would occur. Another possibility is that perhaps I can 
> find a way to keep the BLE hardware in a reset state until the Host is 
> initialized by driving the RESET signal with a GPIO line.
> 
> ALan
> 
> -Original Message-
> From: will sanfilippo [mailto:wi...@runtime.io] 
> Sent: Monday, February 06, 2017 5:55 PM
> To: dev@mynewt.incubator.apache.org
> Subject: Re: BLE HCI support on NRF52DK
> 
> Ah ok; that is quite interesting. I did not realize that was the case and I 
> was thinking of an external board that was powered off (and not quite 
> trusting the state of the flow control lines).
> 
> Then really the only thing we need to make sure on our end is that when UART 
> is brought up and the flow control line is properly de-asserted the nimble 
> stack sees any commands that were sent by the host (in the case where the 
> UART comes up first, then the rest of the nimble stack).
> 
> Will
> 
>> On Feb 6, 2017, at 10:27 AM, Andrzej Kaczmarek 
>> <andrzej.kaczma...@codecoup.pl> wrote:
>> 
>> Hi Will,
>> 
>> I could not find any timeout defined for HCI commands so the problem 
>> here would be when host should timeout and resend HCI Reset. I think 
>> we should just assume that hw is designed properly and flow control 
>> lines are either pulled or driven externally all the time so this is not 
>> overly complicated.
>> Actually, if you check Vol 4 Part A Section 1, it says that objective 
>> of UART TL is to have both ends on the same PCB and communication is 
>> free from errors, so there is no case that we suddenly have controller 
>> disconnected - I'd say above assumption is reasonable :-)
>> 
>> BR,
>> Andrzej
>> 
>> 
>> 
>> On Sat, Feb 4, 2017 at 12:25 AM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>>> Hi Andrzej
>>> 
>>> Thanks for pointing me to Vol 2 Part E, Section 4.4. I was recalling 
>>> a section of the spec that talked about this but could not find it 
>>> when I sent this email. Thus, I completely agree that the controller 
>>> sending a NOOP does not in any way indicate that it reset. It is fine 
>>> if the controller does send a NOOP, but the host cannot use that as 
>>> an indication that the controller reset. That does make things a bit 
>>> tricky though as you mention, but hopefully if something is really 
>>> badly out of sync the host will figure it out and reset the controller.
>>> 
>>> I was also thinking of the following scenario which I should have 
>>> explained a bit better. If the controller is powered off, it is not 
>>> driving the flow control line so I am not sure what would happen HW 
>>> wise in this case. It could be that the flow control line is 
>>> floating, and therefore the host could see it in various states. 
>>> Therefore, I would suspect that when a host issues a HCI Reset and 
>>> does not get a response for some amount of time, it just keeps issuing the 
>>> HCI Reset until it gets a response.
>>> 
>>> Given that a controller can send a NOOP on power up, I cant see how 
>>> we can guarantee that the following will NOT happen:
>>> 
>>> * Host sends HCI Reset
>>> * Controller sends NOOP
>>> * Controller sends Command Complete w/Reset opcode
>>> 
>>> I can also 

Re: [RFC] endianness API cleanup

2017-01-23 Thread will sanfilippo
Szymon:

Indeed, those endianness macros were put in ble.h because they were 
non-standard and acted on a buffer as opposed to just swapping bytes. 
Internally (quite some time ago) we debated using packed structures for  PDU 
protocol elements and we just never ended up deciding on what to do throughout 
the code. We did figure if we went the packed structure route the macros used 
(htole16) would get replaced with ones that just byte swap (if needed).

I looked over the changes and they look good to me. With these changes we 
should also go through the code and use packed structures elsewhere. This will 
definitely save a bunch of code as there will be no swapping since the protocol 
and host are little endian.

I think there are also macros in the host for endianness-related functions. Not 
sure if they have been renamed/replaced.
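
For the buffer-oriented helpers, I would expect something along these lines 
(illustrative sketch, not necessarily the exact code in the branch):

static inline void
put_le16(void *buf, uint16_t x)
{
    uint8_t *u8 = buf;

    u8[0] = (uint8_t)x;
    u8[1] = (uint8_t)(x >> 8);
}

static inline uint16_t
get_le16(const void *buf)
{
    const uint8_t *u8 = buf;

    return (uint16_t)(u8[0] | ((uint16_t)u8[1] << 8));
}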


> On Jan 23, 2017, at 8:34 AM, Szymon Janc  wrote:
> 
> Hi,
> 
> While lurking in code I noticed that endianness APIs in Mynewt
> are bit strange and scattered around:
> - htole16, htobe16 etc are defined in "nimble/ble.h"
> - above mentioned functions have signatures different than same named
>  functions normally defined in endian.h
> 
> So to clean those up I propose following:
> - rename functions existing in ble.h to put_le16, get_le16 etc which are
>   intended for use on raw byte buffer
> - move those to endian.h
> - add standard htole16 etc definitions in endian.h
> 
> Some open points:
> 1) there are two functions in ble.h
> void swap_in_place(void *buf, int len);
> void swap_buf(uint8_t *dst, const uint8_t *src, int len);
>   that I also moved to endian.h for time being but I think that eventually
>   we should have "os/misc.h" (or utils.h) for such helpers
> 
> 2) I had to wrap macros in endian.h into #ifndef-endif since tests seem
>   to be including both os/ and system includes resulting in macro redefined
>   error
> 
> Code implementing above is available at [1].
> 
> Comments are welcome.
> 
> 
> [1] https://github.com/sjanc/incubator-mynewt-core/commits/endianness
> 
> -- 
> pozdrawiam
> Szymon K. Janc



Re: Issues with bleprph and blecent on nRF51822xxaa

2017-02-16 Thread will sanfilippo
Hello there Marcos:

Indeed, some of the sample apps probably won't run in 16KB RAM. If a malloc 
fails it should be pretty easy to debug as I would suspect most mallocs in the 
code assert() if they can't get the memory.

Is there a specific app you want to run?


> On Feb 16, 2017, at 8:19 PM, Marcos Scheeren  wrote:
> 
> Hi, Marko.
> 
> On Tue, Feb 14, 2017 at 2:33 PM, marko kiiskila  wrote:
>> Hi,
>> 
>> 
>> Quick peek to gdb sources tells me that the memory region is marked as
>> flash, and comment says that only allow writes during ‘load’ phase (which,
>> technically, I guess would be correct). Check the output of ‘info mem’, and 
>> see if you
>> can change the properties of it.
>> 
> 
> (gdb) info mem
> Using memory regions provided by the target.
> Num Enb Low Addr   High Addr  Attrs
> 0   y   0x00000000 0x00040000 flash blocksize 0x400 nocache
> 1   y   0x10001000 0x10001100 flash blocksize 0x100 nocache
> 2   y   0x20000000 0x20004000 rw nocache
> 
> 
>> Alternative would be to convert the binary blob into a ihex or srecord 
>> format.
>> gdb can load these the same way as it can load elf. You can use objcopy
>> to do that. Note that elf has location data, as do ihex and srecord.
>> 
> 
> I tried "$ arm-none-eabi-objcopy bletest.elf.bin -O srec bletest.elf.bin.srec"
> but it yields: arm-none-eabi-objcopy:bletest.elf.bin: File format not 
> recognized
> 
> When inputting the .elf file, it converts ok to both srec and ihex and GDB
> accepts both just fine.
> 
> 
>> 
>> My guess the system is out of heap. Check while in gdb:
>> p/d sbrkBase-brk
>> 
>> Hopefully there are things you can prune out.
>> 
> 
> The output of p/d sbrkBase-brk in gdb:
> blehci: -5392
> bletest: -1120
> bleprph: -192
> bleprph (BLE_LL_CFG_FEAT_LE_ENCRYPTION: 0 // BLE_SM_LEGACY: 0):  -1072
> blecent: -1200
> 
>> 
>> Highly unlikely that the linker scripts would cause this.
>> I suspect it’s the RAM usage.
> 
> Could it be that for some examples/apps 16KB MCUs just aren't enough?
> 
>> 
>> Let me know how it goes,
>> M
>> 
>> 
> 
> Thank you.
> Marcos.



Re: Bluetooth specification question after seeing Android 7.1.1 disconnect

2017-01-17 Thread will sanfilippo
It was not a phone I was using. I think it was a Nexus 6P. And yeah, I 
shouldn't have said “Android” when I was mentioning the bug. I have used other 
Android phones and they don't have this issue. Well, I have used one other 
Android phone (I think it was a Nexus 5x) and there was no issue.

Regarding the proposed fix. I agree that the spec does not mention what to do 
when a LL_REJECT_IND (or REJECT_IND_EXT) is received (outside of the control 
procedures where use of REJECT_IND is expected). The spec is quite clear in 
other areas though; for example, a Data Length Update procedure ends only when 
a LL_LENGTH_RSP is received or LL_UNKNOWN_RSP is received.

This might just be me, but I really dislike adding work-arounds to what are 
pretty clearly bugs and that also clearly violate the spec in other areas. I 
also "worry" that there might be other unintended consequences by doing this. 
For example, the nimble controller issues a connection update and the peer 
responds with LL_REJECT_IND. We cancel the procedure but the peer accepts the 
connection update (which would cause a supervision timeout).

I wonder if there is a work-around that would fix this particular issue with 
this controller that would not violate the spec in other areas? Don't get me 
wrong; I think your idea is very reasonable and makes sense. Especially if you 
have encountered this with other devices.


> On Jan 17, 2017, at 2:12 AM, Andrzej Kaczmarek 
> <andrzej.kaczma...@codecoup.pl> wrote:
> 
> Hi Will,
> 
> On Tue, Jan 17, 2017 at 5:48 AM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Hello:
>> 
>> Was wondering if there were any folks out there that could comment on
>> something regarding a disconnect issue with an Android Phone running 7.1.1
>> and our bluetooth stack (the controller).
>> 
> 
> Which phone do you use? Android has only host stack (Bluedroid) to this is
> likely specific to controller used in particular phone - I've seen similar
> problems when testing other controller and some "generic" Chinese phones.
> 
> 
>> 
>> What appears to be happening is this:
>> 
>> * Nimble wants to do Data Length Extension and enqueues a LL_LENGTH_REQ
>> when a connection gets created. Nimble is a peripheral btw.
>> * The Android controller wants to do a feature exchange so it enqueues a
>> LL_FEATURE_REQ.
>> * Android controller sends the LL_FEATURE_REQ.
>> * Nimble controller sends a LL_LENGTH_REQ.
>> * Once the nimble controller succeeds in sending the LL_LENGTH_REQ, it
>> sends the LL_FEATURE_RSP.
>> * Android responds with a LL_REJECT_IND with error code 0x24 LMP PDU not
>> allowed.
>> 
> 
> IIRC this is the same as I've seen (even the error code is the same) -
> don't have logs now though...
> 
> 
>> * Android resends the LL_FEATURE_REQ.
>> * Nimble responds with LL_FEATURE_RSP.
>> * Android sends LL_LENGTH_REQ
>> * Nimble controller sends LL_LENGTH_RSP.
>> * All goes fine until nimble controller times out due to a failed LL
>> control procedure: the nimble stack never received a LL_LENGTH_RSP.
>> 
>> NOTE: from the above it is hard to say why the Android controller sent the
>> LL_REJECT_IND. Basically, it appears that the LL_LENGTH_REQ messed up the
>> Android controller as the Android controller was expecting a LL_FEATURE_RSP.
>> 
>> My questions are the following:
>> * I think this is a bug on the part of the Android controller. The
>> specification allows for non-real time response to control PDU’s and it is
>> quite possible that a controller starts a procedure “at the same time” that
>> the remote controller starts a procedure. What I would have expected is
>> that the Android controller should have responded to the LL_LENGTH_REQ with
>> a LL_LENGTH_RSP. Eventually, the Android controller gets the LL_FEATURE_RSP
>> and all should have been fine. Do folks agree with this?
>> * A controller should not use a LL_REJECT_IND as a generic response when a
>> controller sends something unexpected. The LL_REJECT_IND is only used
>> during encryption procedures, connection parameter request update
>> procedures and in a couple of cases where there are Control Procedure
>> collisions. Note that the scenario described above is NOT one of the
>> Control Procedure collisions mentioned in the specification.
>> 
> 
> I agree, this is clearly issue on peer side - there is no procedure
> collision here since both length update and feature request can be handled
> at the same time. However, I think what Nimble should do here is to remove
> transaction once LL_REJECT_IND is received.
> 
> I know specification does use LL_REJECT_IND explicitly only in case o

Re: sys/stats and sys/log

2017-01-17 Thread will sanfilippo
I think the stub approach is fine as well.

> On Jan 17, 2017, at 1:43 PM, Kevin Townsend  wrote:
> 
> I don't have any issues with the stub approach myself, and it's easy to 
> switch back and forth (no more work than changing syscfg.yml)
> 
> 
> On 17/01/17 22:07, marko kiiskila wrote:
>> Hi,
>> 
>> at the moment it is not very easy to get rid of all code
>> related to logging and/or statistics.
>> I ran across this when trying to see how small I can
>> make an image while keeping BLE and OIC.
>> 
>> Therefore, I was going to create stub packages for
>> sys/stats and sys/log.
>> 
>> Then, within the app you can choose between a stub or
>> an actual implementation. We have this same model for
>> picking up implementation of console.
>> 
>> Alternative would be to make syscfg knobs for these.
>> However, I think I prefer the stub packages, I believe
>> that will make the code easier to read (less #ifdef's).
>> 
>> What do you guys think?
> 



Re: stopping scan & adv in bleprph example

2017-01-16 Thread will sanfilippo
If by deep sleep you mean “system off” mode requiring some form of wakeup, it 
is currently not implemented. You would have to hook that in yourself.

> On Jan 16, 2017, at 9:22 AM, Christopher Collins  wrote:
> 
> Hi Chew,
> 
> On Mon, Jan 16, 2017 at 11:33:23AM +, Lm Chew wrote:
>> Hi,
>> 
>> How do I stop the scan &  adv in the bleprph example.
>> 
>> I tried calling the ble_ll_scan_sm_stop(1) and  ble_ll_adv_stop in my app, 
>> but I am still able to see the device on my phone when I perform a scan.
> 
> To stop advertising, call: ble_gap_adv_stop()
> (http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_adv_stop/)
> 
> For BLE operations, an application should only use the host interface.
> Functions with the "ble_ll" prefix are defined by the controller, not
> the host, so your application should not call them.
> 
> Regarding scanning- the bleprph app doesn't perform any scanning, so
> there is no need to stop scanning.  This application only implements the
> peripheral role, so operations like scanning and initiating a connection
> are not compiled in.  However, if you have a different app which does
> support scanning, you would stop the scan procedure by calling
> ble_gap_disc_cancel()
> (http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_disc_cancel/)
> 
>> I am trying to switch between my custom rf stack  and nimble bt stack. So I 
>> need to disable nimble  operation before running my custom RF Stack.
>> And once I am done what I need using the custom RF Stack, I will switch back 
>> nimble.
>> 
>> Another question, how do you put the MCU to deep sleep while using nimble 
>> stack? In the example the MCU does not goes to deep sleep.
> 
> Sorry, I am not sure about this one.  I am not sure this is actually
> supported yet, but I'll let someone more knowledgable chime in.
> 
> Chris



Re: stopping scan & adv in bleprph example

2017-01-16 Thread will sanfilippo
Yes, Mynewt works the same way as FreeRTOS in this respect. Well, at least in 
the way you are describing FreeRTOS. We have a tickless OS and when we decide 
to go to sleep we are waiting for an interrupt to wake us up.

Regarding the radio: there are some registers that are only programmed once, so 
if you switch to your own custom RF stack and you want to switch back to 
bluetooth, you would either have to write some custom code or reset the link 
layer. There is an API to do this but I am not sure if it is accessible to the 
application developer.


> On Jan 16, 2017, at 5:08 PM, Lm Chew <lm.c...@free2move.se> wrote:
> 
> Hi Chris,
> 
> Thanks for the reply.
> 
> So calling ble_gap_adv_stop and ble_gap_disc_cancel will stop all radio 
> activity, is that correct?
> 
> Is it safe to modify the radio settings (on the physical layer, just like in 
> ble_phy) after just calling these functions?
> 
> Hi Will,
> 
> Not exactly a "system off" I am looking for.
> Previously I am using FreeRTOS tickless mode where the MCU will remain in 
> sleep mode most of the time unless there is a task to perform.
> 
> I am asking this because in the bleprph example I don't see any function 
> being called to put the MCU to sleep.
> 
> Does mynewt OS work the same way as FreeRTOS?
> 
> Best Regards,
> Chew
> 
> 
> 
> 
> 
> On Tue, Jan 17, 2017 at 1:57am, will sanfilippo 
> <wi...@runtime.io<mailto:wi...@runtime.io>> wrote:
> 
> If by deep sleep you mean "system off" mode requiring some form of wakeup, it 
> is currently not implemented. You would have to hook that in yourself.
> 
>> On Jan 16, 2017, at 9:22 AM, Christopher Collins <ccoll...@apache.org> wrote:
>> 
>> Hi Chew,
>> 
>> On Mon, Jan 16, 2017 at 11:33:23AM +, Lm Chew wrote:
>>> Hi,
>>> 
>>> How do I stop the scan &  adv in the bleprph example.
>>> 
>>> I tried calling the ble_ll_scan_sm_stop(1) and  ble_ll_adv_stop in my app, 
>>> but I am still able to see the device on my phone when I perform a scan.
>> 
>> To stop advertising, call: ble_gap_adv_stop()
>> (http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_adv_stop/)
>> 
>> For BLE operations, an application should only use the host interface.
>> Functions with the "ble_ll" prefix are defined by the controller, not
>> the host, so your application should not call them.
>> 
>> Regarding scanning- the bleprph app doesn't perform any scanning, so
>> there is no need to stop scanning.  This application only implements the
>> peripheral role, so operations like scanning and initiating a connection
>> are not compiled in.  However, if you have a different app which does
>> support scanning, you would stop the scan procedure by calling
>> ble_gap_disc_cancel()
>> (http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_disc_cancel/)
>> 
>>> I am trying to switch between my custom rf stack  and nimble bt stack. So I 
>>> need to disable nimble  operation before running my custom RF Stack.
>>> And once I am done what I need using the custom RF Stack, I will switch 
>>> back nimble.
>>> 
>>> Another question, how do you put the MCU to deep sleep while using nimble 
>>> stack? In the example the MCU does not goes to deep sleep.
>> 
>> Sorry, I am not sure about this one.  I am not sure this is actually
>> supported yet, but I'll let someone more knowledgable chime in.
>> 
>> Chris
> 



Bluetooth specification question after seeing Android 7.1.1 disconnect

2017-01-16 Thread will sanfilippo
Hello:

Was wondering if there were any folks out there that could comment on something 
regarding a disconnect issue with an Android Phone running 7.1.1 and our 
bluetooth stack (the controller).

What appears to be happening is this: 

* Nimble wants to do Data Length Extension and enqueues a LL_LENGTH_REQ when a 
connection gets created. Nimble is a peripheral btw.
* The Android controller wants to do a feature exchange so it enqueues a 
LL_FEATURE_REQ.
* Android controller sends the LL_FEATURE_REQ.
* Nimble controller sends a LL_LENGTH_REQ.
* Once the nimble controller succeeds in sending the LL_LENGTH_REQ, it sends 
the LL_FEATURE_RSP.
* Android responds with a LL_REJECT_IND with error code 0x24 LMP PDU not 
allowed.
* Android resends the LL_FEATURE_REQ.
* Nimble responds with LL_FEATURE_RSP.
* Android sends LL_LENGTH_REQ
* Nimble controller sends LL_LENGTH_RSP.
* All goes fine until nimble controller times out due to a failed LL control 
procedure: the nimble stack never received a LL_LENGTH_RSP.

NOTE: from the above it is hard to say why the Android controller sent the 
LL_REJECT_IND. Basically, it appears that the LL_LENGTH_REQ messed up the 
Android controller as the Android controller was expecting a LL_FEATURE_RSP.

My questions are the following:
* I think this is a bug on the part of the Android controller. The 
specification allows for non-real-time response to control PDUs and it is 
quite possible that a controller starts a procedure “at the same time” that the 
remote controller starts a procedure. What I would have expected is that the 
Android controller should have responded to the LL_LENGTH_REQ with a 
LL_LENGTH_RSP. Eventually, the Android controller gets the LL_FEATURE_RSP and 
all should have been fine. Do folks agree with this?
* A controller should not use a LL_REJECT_IND as a generic response when a 
controller sends something unexpected. The LL_REJECT_IND is only used during 
encryption procedures, connection parameter request update procedures and in a 
couple of cases where there are Control Procedure collisions. Note that the 
scenario described above is NOT one of the Control Procedure collisions 
mentioned in the specification.

Thanks!




Re: MBUF sizing for the bluetooth stack

2017-01-20 Thread will sanfilippo
Simon:

I think you are pretty much correct; generally you are better off with smaller 
size mbufs. However, there are cases where larger mbufs are better (for 
example, a very large portion of your data packets are large).
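
The msys pool shape is a syscfg knob, so it is cheap to experiment with both; 
e.g. (values purely illustrative):

syscfg.vals:
    MSYS_1_BLOCK_COUNT: 12
    MSYS_1_BLOCK_SIZE: 292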

> On Jan 19, 2017, at 11:57 PM, Simon Ratner  wrote:
> 
> Thanks Chris,
> 
> It appears to me that there is questionable benefit to having mbufs sized
> larger than the largest L2CAP fragment size (plus overhead), i.e. the 80
> bytes that Will mentioned. Is that a reasonable statement, or am I missing
> something?
> 
> For incoming data, you always waste memory with larger mbufs, and for
> outgoing data host will take longer to free the memory (since you can't
> free the payload mbuf until the last fragment, as opposed to freeing
> smaller mbufs as you go), and you don't save on the number of copies in the
> host. You will save something on mbuf allocations and mbuf header overhead
> in the app as you are generating the payload, though.
> 
> When allocating mbufs for the payload, is there something I should do to
> reserve enough leading space for the ACL header to make sure host doesn't
> need to re-allocate it?
> 
> Also, at least in theory, it sounds like you could size mbufs to match the
> fragment exactly -- or pre-fragment the mbuf chain as you are generating
> the payload -- and have zero copies in the host. Could be useful in a
> low-memory situation, if the host was smart enough to take advantage of
> that?
> 
> 
> 
> 
> On Thu, Jan 19, 2017 at 11:13 AM, Christopher Collins 
> wrote:
> 
>> On Thu, Jan 19, 2017 at 10:57:58AM -0800, Christopher Collins wrote:
>>> On Thu, Jan 19, 2017 at 03:46:49AM -0800, Simon Ratner wrote:
 A related question: how does this map to large ATT_MTU and fragmented
 packets at the L2CAP level (assuming no data length extension)? Does
>> each
 fragment get its own mbuf, which are then chained together, or does the
 entire packet get reassembled into a single mbuf if there is room?
>>> 
>>> If the host needs to send a large packet, it packs the payload into an
>>> mbuf chain.  By "packs," I mean each buffer holds as much data as
>>> possible with no regard to the maximum L2CAP fragment size.
>>> 
>>> When the host sends an L2CAP fragment, it splits the fragment payload
>>> off from the front of the mbuf chain, constructs an ACL data packet, and
>>> sends it to the controller.  If a buffer at the front of mbuf can be
>>> freed, now that data has been removed, the host frees it.
>>> 
>>> If you are interested, the function which handles fragmentation and
>>> freeing is mem_split_frag() (util/mem/src/mem.c).
>> 
>> I rushed this response a bit, and there are some important details I
>> neglected.
>> 
>> * For the final L2CAP fragment in a packet, the host doesn't
>> do an allocating or copying.  Instead, it just prepends an ACL data
>> header to the mbuf chain and sends it to the controller.
>> 
>> * For all L2CAP fragments *other than the last*, the host allocates an
>> additional mbuf chain to hold the ACL data packet.  The host then copies
>> the fragment data into this new chain, sends it, and frees buffers from
>> the front of the original chain if possible.  The number of buffers that
>> get allocated for the fragment depends on how the maximum L2CAP fragment
>> size compares to the msys mbuf size.  If an msys mbuf buffer has
>> sufficient capacity for a maximum size L2CAP fragment, then only one
>> buffer will get allocated.  If the mbuf capacity is less, the chain that
>> gets allocated will consist of multiple buffers.
>> 
>> * An L2CAP fragment mbuf chain contains the following:
>>* mbuf pkthdr   (8 bytes)
>>* HCI ACL data header   (4 bytes)
>>* Basic L2CAP header(4 bytes)
>>* Payload   (varies)
>> 
>> * For incoming data, the host does not do any packing.  Each L2CAP
>> fragment is simply chained together.
>> 
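
To put a number on that layout, a small sketch (the 16-byte per-mbuf header
comes from the mbuf sizing thread elsewhere in this list; the macro names are
made up):

/* Per-fragment overhead in the first buffer of an L2CAP fragment
 * chain: os_mbuf header (16) + mbuf pkthdr (8) + ACL data header (4) +
 * basic L2CAP header (4).
 */
#define FRAG_FIRST_BUF_OVERHEAD   (16 + 8 + 4 + 4)    /* 32 bytes */

/* Maximum payload the first buffer of an msys block of size 'blk_sz'
 * can carry. */
#define FRAG_FIRST_BUF_PAYLOAD(blk_sz)  ((blk_sz) - FRAG_FIRST_BUF_OVERHEAD)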



Re: [RFC] Reducing size of BLE Security Manager

2017-01-20 Thread will sanfilippo
I have mixed feelings about packed structures. For processors that cannot 
handle unaligned accesses I have always found that they increased code size. 
Every access of an element in that structure needs code to determine the 
alignment of that element. Sure, they save RAM, so if that is what you want 
then fine, but code size? When you did this code size comparison did you do it 
on a processor that handles unaligned access? This can also impact the speed at 
which the code runs although that is rarely an issue.

About reducing copies. I am sure you know this, but folks should be careful 
doing something like mystruct = (struct mystruct *)om->om_data. You are not 
guaranteed that the data is contiguous, so you had better m_pullup 
(os_mbuf_pullup() in Mynewt) first.
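
A minimal sketch of that safe pattern (assumes the Mynewt os_mbuf API; the
error value is illustrative):

/* Ensure the first sizeof(struct mystruct) bytes are contiguous before
 * casting.  os_mbuf_pullup() frees the chain and returns NULL on
 * failure, so the result must be checked.
 */
om = os_mbuf_pullup(om, sizeof(struct mystruct));
if (om == NULL) {
    return -1;  /* chain already freed; illustrative error value */
}
mystruct = (struct mystruct *)om->om_data;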

The controller does byte-by-byte copies and does not use packed structs. If we 
find that they generally save code space we can modify that code as well.

> On Jan 20, 2017, at 8:21 AM, Christopher Collins  wrote:
> 
> Hi Szymon,
> 
> On Fri, Jan 20, 2017 at 10:21:16AM +0100, Szymon Janc wrote:
>> Hi,
>> 
>> I was recently looking on how we could reduce size of SM code.
>> So my proposal is to change the way PDUs are parsed and constructed.
>> 
>> Instead of having ble_sm_foo_parse(), ble_sm_foo_write() and ble_sm_foo_tx()
>> for parsing and constructing PDU byte by byte we could use packed structures
>> for describing PDU and let compiler figure out details related to
>> unaligned access.
> [...]
> 
> I think that's a great idea.  The ATT code does something similar,
> though there is probably more work to be done there.  In my opinion,
> using packed structs for parsing and encoding doesn't just reduce code
> size, it also simplifies the code.
> 
> Chris
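
As a concrete illustration of the proposal, a packed-struct view of the SM
Pairing Request PDU might look like this (a sketch following the field order
in the spec, not necessarily the actual nimble definition):

#include <stdint.h>

/* Sketch: SM Pairing Request payload as a packed struct.  With this
 * view, parsing becomes a pullup plus a cast, and the compiler emits
 * whatever unaligned-access handling the target needs.
 */
struct ble_sm_pair_cmd {
    uint8_t io_cap;
    uint8_t oob_data_flag;
    uint8_t authreq;
    uint8_t max_enc_key_size;
    uint8_t init_key_dist;
    uint8_t resp_key_dist;
} __attribute__((packed));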



Re: MBUF sizing for the bluetooth stack

2017-01-19 Thread will sanfilippo
That is a good question. I should let Chris answer this one as he knows for 
sure. I suspect you will have a chain of mbufs but I would have to look over 
the code to be sure.


> On Jan 19, 2017, at 3:46 AM, Simon Ratner <si...@proxy.co> wrote:
> 
> Hi Will,
> 
> A related question: how does this map to large ATT_MTU and fragmented
> packets at the L2CAP level (assuming no data length extension)? Does each
> fragment get its own mbuf, which are then chained together, or does the
> entire packet get reassembled into a single mbuf if there is room?
> 
> 
> 
> On Wed, Jan 11, 2017 at 4:57 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Yes; 76 or 80. Note that I have not actually tested with 80 byte mbuf
>> blocks. That is the theory though :-)
>> 
>>> On Jan 11, 2017, at 4:31 PM, Simon Ratner <si...@proxy.co> wrote:
>>> 
>>> Got it; by minimum size you mean the 76/80 bytes?
>>> 
>>> On Wed, Jan 11, 2017 at 4:17 PM, will sanfilippo <wi...@runtime.io>
>> wrote:
>>> 
>>>> Well, yes, there are “definitions” for these things. They are in various
>>>> places but they are there. Using them might get a bit tricky as you have
>>>> mentioned; not sure. You would have to make sure the right header files
>> get
>>>> included in the proper places...
>>>> 
>>>> Anyway, here are the definitions:
>>>> os mbuf header: sizeof(struct os_mbuf). Size = 16
>>>> os mbuf packet header: sizeof(struct os_mbuf_pkthdr) Size = 8
>>>> user header: sizeof(struct ble_mbuf_hdr) Size = 8 or 12
>>>> The HCI ACL data header: BLE_HCI_DATA_HDR_SZ. 4 bytes
>>>> The LL PDU header: BLE_LL_PDU_HDR_LEN. 2 bytes
>>>> 
>>>> I would always make the size a multiple of 4 but the code should do that
>>>> for you; I just like to do it so the size you see in the syscfg
>> variable is
>>>> the actual memory block size.
>>>> 
>>>> Another thing I should mention: you should never add a buffer pool to
>> msys
>>>> smaller than the minimum size I mentioned if you are using the
>> controller.
>>>> This is something we will address in the future but for now it would be
>>>> bad. :-)
>>>> 
>>>> 
>>>> 
>>>>> On Jan 11, 2017, at 3:49 PM, Simon Ratner <si...@proxy.co> wrote:
>>>>> 
>>>>> Thanks for the detailed write-up, Will - very useful.
>>>>> 
>>>>> Are there defines for these things?
>>>>> Ideally, if I want a payload size of N, I'd like to specify in
>>>> syscfg.yml:
>>>>> 
>>>>>  MSYS_1_BLOCK_SIZE: '(N + MBUF_HEADER + PKT_HEADER + LL_OVERHEAD +
>>>> ...)'
>>>>> 
>>>>> And magically have optimally-sized buffers.
>>>>> 
>>>>> 
>>>>> On Wed, Jan 11, 2017 at 11:00 AM, will sanfilippo <wi...@runtime.io>
>>>> wrote:
>>>>> 
>>>>>> Hello:
>>>>>> 
>>>>>> Since this has come up on a number of different occasions I wanted to
>>>> send
>>>>>> out an email which discusses how the nimble stack uses mbufs. This
>> will
>>>> be
>>>>>> a controller-centric discussion but the concepts apply to the host as
>>>> well.
>>>>>> 
>>>>>> A quick refresher on mbufs: Mynewt, and the nimble stack, use mbufs
>> for
>>>>>> networking stack packet data. A “packet” is simply a chain of mbufs
>> with
>>>>>> the first mbuf in the chain being a packet header mbuf and all others
>>>> being
>>>>>> “normal” mbufs. A packet header mbuf contains a mbuf header, a packet
>>>>>> header and an optional user-defined header.
>>>>>> 
>>>>>> The length of the packet (i.e. all the data contained in all the mbuf
>>>>>> chains) is stored in the packet header. Each individual mbuf in the
>>>> chain
>>>>>> also contains a length which is the length of the data in that mbuf.
>> The
>>>>>> sum of all the mbuf data lengths = length of packet.
>>>>>> 
>>>>>> The amount of overhead in an mbuf and its size determine the amount of
>>>>>> data that can be carried in a mbuf. All mbufs have a 16-byte mbuf
>>>> header.
>>>>>> Packet header mbufs have an additional 8 bytes for the packet header.
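
Putting those numbers together, a minimal sketch of the block-size arithmetic
(macro names are illustrative, not real Mynewt symbols; assumes the 12-byte
worst-case user header):

/* Sketch: msys block size needed for an N-byte payload, using the
 * sizes listed above, rounded up to a multiple of 4 so the syscfg
 * value matches the real block size.
 */
#define MBUF_HDR_SZ        16   /* sizeof(struct os_mbuf) */
#define MBUF_PKTHDR_SZ      8   /* sizeof(struct os_mbuf_pkthdr) */
#define BLE_USR_HDR_SZ     12   /* sizeof(struct ble_mbuf_hdr), worst case */
#define ACL_HDR_SZ          4   /* BLE_HCI_DATA_HDR_SZ */
#define LL_HDR_SZ           2   /* BLE_LL_PDU_HDR_LEN */

#define MSYS_BLOCK_SIZE(n)                                      \
    (((MBUF_HDR_SZ + MBUF_PKTHDR_SZ + BLE_USR_HDR_SZ +          \
       ACL_HDR_SZ + LL_HDR_SZ + (n)) + 3) & ~3)

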

Re: Packet length checks

2016-08-24 Thread will sanfilippo
Not sure if I am answering your question, and maybe you still consider this a 
security hole, but the lengths are checked in different places depending on the 
packet being received. For advertising channel packets, the lower-layer 
routines (ones that get called before the packet is handed to the link layer 
task) do length checking to make sure that the packet length matches the type 
of packet being sent. It is possible for memory to get corrupted between the 
check and handing it to the link layer task but not sure that is what you are 
concerned with.

Regarding data channel packets, I would have to go through the code to see 
exactly which length checks are made but it could be that the only length check 
made is one where the number of bytes pulled from the chip matches the value in 
the received data. Note that control packets are checked to make sure they are 
the appropriate length for that type of control frame.

To make a long story short, the length checks should be done before the packet 
gets handed to the link layer task.


> On Aug 24, 2016, at 3:04 PM, Tim Hutt  wrote:
> 
> Hi,
> 
> I've been looking through the code for ble_ll_rx_pkt_in(), and I can't see
> anywhere where the length of `rxbuf` (`m->om_data`) is checked. The value
> seems to be available in `m->om_len` but isn't read anywhere. Subsequent
> functions just seem to assume that a packet is as large as it should be.
> 
> Assuming I haven't misread the code, isn't that a huge security issue?
> 
> Cheers,
> 
> Tim



Re: nmgr, shell vs ble host

2016-08-31 Thread will sanfilippo
I am all for packages/libraries having the ability to run from a single task 
and not have to create their own. Of course, we still must allow packages to 
create their own tasks, and we need to make sure that those tasks run at a given 
priority. From what you are saying, that will still be allowed; you just want a 
way for a library/package that doesn't care about having its own task to run 
within a single task context, and have the OS provide for this.

It seems to me that creating an “application task” to handle this is easy 
enough to do. Any library/package that does not need its own task would 
“register” with this task (or library or package or whatever you want to call 
it) and there you go. The only “tricky” thing is the following: say the 
project/app you are building consists only of packages that use their own task 
and create it themselves and dont want to run in anyone else’s context. This 
means there is no need for this “app” task. Maybe I am looking at this the 
wrong way or missing something (quite likely) but how would that app task not 
get created in that case?

Will

PS RE: controller task. While a lot of the critical timing done by the 
controller occurs at interrupt context or is performed by the chip, a fair 
amount of work is done at the Link Layer task and I dont think it is advisable 
to move it all into interrupt context. I know you would love to get rid of the 
controller stack but I would not advise it. Doing “upper” link layer functions 
within the context of another task seems fraught with peril. If the LL task 
gets delayed by just a few msecs we could be missing connection and/or 
advertising events. Which while not catastrophic is not advisable.

And just an FYI: maybe this is simply how I have been doing these things for 
the last many years, but I prefer extremely short interrupts. Currently, there 
are what I consider very long interrupts and the only reason is so that we did 
not need two controller tasks: one for the timing critical LL functions and one 
for the not so timing critical LL functions. And that was just to save stack! 
:-)
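
To make the registration idea concrete, here is a sketch of the pattern (all
pkg_* names are made up, and the callback-style os_event is an assumption):

#include "os/os.h"

/* Sketch: a package that runs from a caller-supplied event queue
 * instead of creating its own task. */
static struct os_eventq *pkg_evq;

static void
pkg_event_handler(struct os_event *ev)
{
    /* work happens in whichever task runs pkg_evq */
}

static struct os_event pkg_ev = {
    .ev_cb = pkg_event_handler,
};

void
pkg_init(struct os_eventq *evq)
{
    pkg_evq = evq;      /* "register" with the shared app task's queue */
}

void
pkg_schedule_work(void)
{
    os_eventq_put(pkg_evq, &pkg_ev);
}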

> On Aug 31, 2016, at 4:17 PM, Sterling Hughes  wrote:
> 
> PS: I’d be interested in what Will has to say w.r.t controller, which may be 
> an exception — given that it will have hard timing requirements, and likely 
> need to be a high priority task.  My mail was meant to address more “appy” 
> (app-ish?) libraries.
> 
> On 31 Aug 2016, at 16:14, Sterling Hughes wrote:
> 
>> Hey,
>> 
>> I’ve been wondering how we should handle libraries in the OS, that need to 
>> run in a task context, but don’t necessarily need their own _dedicated_ task 
>> context.
>> 
>> I think right now we have two approaches:
>> 
>> - Create a library that takes an event queue, and expects to be called 
>> within a task context, but doesn’t create it’s own task (example: BLE host - 
>> http://mynewt.apache.org/latest/network/ble/ini_stack/ble_parent_ini/)
>> 
>> - Including that package creates its own task, and requires it to operate in 
>> that context.  (example: newtmgr - 
>> https://github.com/apache/incubator-mynewt-core/blob/master/libs/newtmgr/src/newtmgr.c#L504)
>> 
>> Personally, I think we should move the first approach taken by the bluetooth 
>> host for all system libraries that require a task context to operate.  I 
>> don’t see any reason why you couldn’t run newtmgr and BLE host in the same 
>> context, and save the RAM space by having a big app task that keeps posting 
>> to its own event queue.
>> 
>> What do folks think?  I think while we’re revising system init, it would be 
>> a good time to look at this, and come up with a consistent mode of operation.
>> 
>> Sterling



Re: hal watchdog

2016-08-30 Thread will sanfilippo
Sounds reasonable. As I am sure you know, doing it through the sanity task can 
make it tricky to get the timeout right, as you would then need to know the 
worst-case timing of all the tasks that could be running… but any way you cut 
it, you have to put some time limit on that. In past lives I have seen some 
pretty complicated ways to deal with this, but this seems reasonable, and if 
developers need something different they can implement it with this HAL.

> On Aug 30, 2016, at 10:39 AM, marko kiiskila <ma...@runtime.io> wrote:
> 
> Hi Will,
> 
>> On Aug 29, 2016, at 4:53 PM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> I have some questions: 
>> 
>> 1) What happens if the internal watchdog does not allow for a long timeout?
> 
> I was thinking of just returning an error from init in that case, but maybe we
> need a routine that returns the max supported timeout. And make at least
> the watchdogs for the current MCUs support at least 30 seconds?
> 
>> 2) When developers create the system and want a HW watchdog, what in the OS 
>> tickles the watchdog? Is that done by the sanity task or is it done by the 
>> OS in some other manner (os time tick, for example)? Or does the creator of 
>> the application need to provide for the tickle?
> 
> This would be done by sanity task. For folks who do not want to use sanity 
> task would have to come up with
> another mechanism.
> 
>> Thanks
>> 
>> PS I am not sure if memory serves (and it rarely does!) but I think I have 
>> worked on older MCU’s whose maximum internal watchdog timeout was < 1 
>> second. I dont know if current day MCU’s have this kind of limitation, but 
>> if they did, how would that be addressed? Or is it not a concern…
> 
> Argh, I had not considered such hardware. I think I would make the tickling 
> happen then on 2 layers;
> sub second stuff being internal to driver through a timer interrupt, and the 
> slower tickling happening
> through the watchdog API.
> 
>>> On Aug 29, 2016, at 4:40 PM, marko kiiskila <ma...@runtime.io> wrote:
>>> 
>>> Hi,
>>> 
>>> I was going to add support for hardware watchdog(s).
>>> The API I was thinking would be pretty simple.
>>> 
>>> The first user for this would be the sanity task.
>>> 
>>> —8<---
>>> /*
>>> * Set the watchdog time to fire no sooner than 'expire_secs' seconds from 
>>> now.
>>> */
>>> int hal_watchdog_init(int expire_secs);
>>> 
>>> /*
>>> * Tickles the watchdog. Needs to be done before 'expire_secs' fires.
>>> */
>>> int hal_watchdog_tickle(void);
>>> 
>>> /*
>>> * Stops the watchdog.
>>> */
>>> int hal_watchdog_stop(void);
>>> 
>>> —8<———
>>> 
>>> Let me know if this doesn’t seem right.
>> 
> 
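
A minimal sketch of how the sanity task might drive this API (the 30-second
timeout is arbitrary; uses only the calls proposed above):

#include <assert.h>

static void
sanity_task_handler(void *arg)
{
    int rc;

    rc = hal_watchdog_init(30);     /* fire no sooner than 30s from now */
    assert(rc == 0);

    while (1) {
        /* ... run the sanity checks on registered tasks ... */
        hal_watchdog_tickle();      /* must occur before the 30s elapse */
    }
}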



Re: Removing os_error_t

2016-09-12 Thread will sanfilippo
Fine with me, but I do also like using BOOLEAN types and having functions 
return either TRUE or FALSE. I think it makes the code easier to read… so I 
hope we can still use TRUE or FALSE for some functions.

> On Sep 11, 2016, at 11:32 AM, Christopher Collins  wrote:
> 
> On Sun, Sep 11, 2016 at 10:42:07AM -0700, Sterling Hughes wrote:
> [...]
>> So — prior to 1.0, I think we should clean this up.  My proposal is to 
>> go with plain old integers as error codes across the system.  0 is no 
>> error, a negative value is the error code and a positive value can be 
>> used when returning actual data (and must be marked in the function 
>> signature/doxygen comment.)  Although, if we want to go the enum route, 
>> I’m happy with that too, but I think we should clean up the code that 
>> currently uses integers as return values (there is a lot of it), to move 
>> to cleanly specifying where the error parameters are.
> 
> I agree with removing enum return codes, as enums have odd semantics and
> don't provide any real benefit in my opinion [*].  0 for success,
> positive for data, negative for error sounds like a fine convention to
> me.
> 
> Chris
> 
> [*] The type of an enum is implementation defined (4).  The type of an enum
> value is always int (3): 
> 
> (ISO/IEC 9899:201x, 6.7.2.2)
> 3 The identifiers in an enumerator list are declared as constants that
> have type int and may appear wherever such are permitted. An enumerator
> with = defines its enumeration constant as the value of the constant
> expression. If the first enumerator has no =, the value of its
> enumeration constant is 0. Each subsequent enumerator with no = defines
> its enumeration constant as the value of the constant expression
> obtained by adding 1 to the value of the previous enumeration constant.
> (The use of enumerators with = may produce enumeration constants with
> values that duplicate other values in the same enumeration.) The
> enumerators of an enumeration are also known as its members.
> 
> 4 Each enumerated type shall be compatible with char, a signed integer
> type, or an unsigned integer type. The choice of type is
> implementation-defined, but shall be capable of representing the
> values of all the members of the enumeration.The enumerated type is
> incomplete until immediately after the } that terminates the list of
> enumerator declarations, and complete thereafter.
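
A quick illustration of that convention (the function and its error value are
made up):

#include <stddef.h>
#include <stdint.h>

/* Illustrative only: 0 = success, negative = error, positive = data. */
int
sensor_read(uint8_t *buf, int maxlen)
{
    if (buf == NULL || maxlen < 4) {
        return -1;          /* negative: an error code */
    }
    /* ... fill buf with 4 bytes of sample data ... */
    return 4;               /* positive: actual data (bytes read) */
}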



HAL Timer API

2016-09-08 Thread will sanfilippo
Hello:

We are working on a HAL API for timer peripherals. This HAL will be used to 
provide high resolution timers and not an OS based timer; that already exists. 
This HAL provides the ability to receive a callback when a timer expires and 
also to do short, blocking delays. Timer expiration can be set relative to 
“now” (hal_timer_start) or to occur at a specified timer tick value 
(hal_timer_start_at). 

Questions/comments:

1) I think that one of the major points of contention with this API will be the 
use of ticks instead of standard units of time (microseconds, milliseconds, 
etc). There is a helper function to convert ticks to microseconds (and vice 
versa) but that is about all. We have reasons why we chose ticks which I would 
be happy to discuss with those interested.

2) How do you think the hal_timer_start_at() API should act if the specified 
tick has already passed? Should it call the callback at the current context or 
return an error? I am leaning towards returning an error code to indicate this.

/* HAL timer callback */
typedef void (*timer_cb)(void *arg);

/* HAL timer struct */
struct hal_timer
{
    timer_cb    cb_func;
    void        *cb_arg;

    /* NOTE: these are here to denote some internals that will be kept.
       These may change or there may be additions */
    uint32_t    expiration_tick;
    struct hal_timer *next;
};

/* Initialize the timer at the given frequency */
int hal_timer_init(int timer_num, uint32_t freq_hz);

/*
 * Returns the resolution of the timer. NOTE: the frequency may not be
 * obtainable so the caller can use this to determine the resolution.
 * Returns resolution in nanoseconds.
 */
uint32_t hal_timer_get_resolution(int timer_num);

/* Convert ticks to usecs */
uint32_t hal_timer_ticks_to_usecs(int timer_num, uint32_t ticks);

/* Convert microseconds to ticks */
uint32_t hal_timer_usecs_to_ticks(int timer_num, uint32_t usecs);

/* Returns the timers current tick value */
uint32_t hal_timer_read(int timer_num);

/* Perform a blocking delay for a number of ticks. */
int hal_timer_delay_ticks(int timer_num, uint32_t ticks);

/* Initialize the HAL timer structure with the callback and the callback 
argument */
int hal_timer_set_cb(struct hal_timer *, timer_cb cb_func, void *);

/* Start a timer that will expire in ‘ticks’ ticks. Ticks cannot be 0 */
int hal_timer_start(struct hal_timer *, uint32_t ticks);

/* Stop a currently running timer */
int hal_timer_stop(struct hal_timer *);

/*
 * Start a timer that will expire when the timer reaches ‘tick’.
 * If ‘tick’ has already passed, an error is returned (see question 2
 * above).
 */
int hal_timer_start_at(struct hal_timer *, uint32_t tick);
int hal_timer_start_at(struct hal_timer *, uint32_t tick);
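
A short usage sketch of the proposed API (timer number and frequency are
arbitrary; uses only the prototypes above):

#include <stddef.h>
#include <stdint.h>

static struct hal_timer my_timer;

static void
my_timer_cb(void *arg)
{
    /* runs when the timer expires */
}

static void
example(void)
{
    uint32_t ticks;

    hal_timer_init(0, 1000000);                  /* timer 0 at 1 MHz */
    hal_timer_set_cb(&my_timer, my_timer_cb, NULL);
    ticks = hal_timer_usecs_to_ticks(0, 50);     /* 50 us worth of ticks */
    hal_timer_start(&my_timer, ticks);           /* fires in ~50 us */
}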











