Hi Gunar,

I'm not very experienced on the driver development side, but I am experienced enough as a user to see some issues.

On 9/26/18 9:27 AM, Gunar Schorcht wrote:

> - Should a driver be as complete as possible, which of course produces
> more code, or should it be kept simple to produce small code? One option
> would be to use the pseudomodule approach to enable additional features.

Part of keeping it small is omitting conversion code (see answer below).

How often does it happen that one runs out of flash space? I'm asking because I honestly don't know. I do know that it's probably easier for the user to remove stuff if he runs out of flash than to read the device manual and add the missing functions if the driver is incomplete.

> On some platforms unused code is not linked into the binary.


Unused functions, where the linker can determine the function is not used. If you have a big function for configuring device modes, but you never call it with certain parameters and a big chunk of it goes unused, that chunk may not be optimized away (I'm not sure if LTO changes this).
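
For reference, with GCC toolchains this pruning usually relies on compiling each function into its own section and letting the linker drop the unreferenced sections, roughly:

    CFLAGS  += -ffunction-sections -fdata-sections
    LDFLAGS += -Wl,--gc-sections

I believe RIOT's build system already does this on most platforms. Note that it works at function granularity only; dead code *inside* a reachable function needs LTO or constant propagation to disappear.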

> - Should a driver support at least data-ready interrupts (if possible at
> all) to realize event-driven data retrieval?


Yes. Totally yes. Polling is dumb:

* Goes against low power goals.
* The data is not polled with a clock that is synchronized with the sensor clock (if the sensor has an ADC), meaning unpredictable jitter.
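
To make the event-driven case concrete, here is a minimal sketch using RIOT's periph/gpio interrupt API together with the msg API; the pin name and the actual sensor read are placeholders:

    #include "msg.h"
    #include "thread.h"
    #include "periph/gpio.h"

    #define DRDY_PIN    GPIO_PIN(PORT_A, 3)  /* hypothetical data-ready pin */

    static kernel_pid_t reader_pid;

    static void drdy_cb(void *arg)
    {
        (void)arg;
        msg_t m = { .type = 0 };
        msg_send(&m, reader_pid);   /* non-blocking when called from ISR */
    }

    void *reader_thread(void *arg)
    {
        (void)arg;
        static msg_t queue[4];
        msg_init_queue(queue, 4);   /* buffer events that arrive while busy */
        reader_pid = thread_getpid();

        gpio_init_int(DRDY_PIN, GPIO_IN, GPIO_RISING, drdy_cb, NULL);

        while (1) {
            msg_t m;
            msg_receive(&m);   /* thread sleeps here: no polling, no jitter */
            /* read exactly one fresh sample from the sensor here */
        }
        return NULL;
    }

The thread sleeps until the sensor itself signals that a conversion is done, so samples are taken at the sensor's own rate and the MCU can stay in a low power state in between.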

> - Should a driver always return normalized/converted data, or rather
> return the raw data and the application needs to convert them? The
> conversion is sometimes quite complex. I saw both approaches for
> similar sensors.


RAW data.

* Conversion usually results in loss of precision, especially if one limits the word length to something like 16 bits (see answer below).
* Doing conversion "right" (in an unbiased way) is non-trivial. You cannot just go around truncating digits.
* It is beyond the scope of the driver, which should handle device communication/configuration only.
* If the converted value is not needed, the conversion cannot be undone.
* In SAUL, conversion to and from the base-10 floating point format used is really painful.
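
Concretely, SAUL represents values with phydat, which (paraphrasing from memory, check phydat.h) looks like this, and a 24-bit raw value simply does not fit without throwing precision away:

    /* Paraphrased from RIOT's phydat.h: */
    typedef struct {
        int16_t val[PHYDAT_DIM];   /* up to 3 values */
        uint8_t unit;              /* physical unit */
        int8_t  scale;             /* reading = val * 10^scale */
    } phydat_t;

    /* A 24-bit sample of, say, 12345678 counts must become
     * val = 1234, scale = 4 (i.e. 12340000): the low 5678 counts
     * of resolution are silently gone. */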

I think the measurement should be raw, and there should be a way to query the conversion constant. This way the user can choose, and no unnecessary computations are done.

In control applications, for example, the conversion is not necessary at all, as the conversion constants can be folded into the control system constants.
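
To make this concrete, here is a minimal sketch of the kind of interface I mean; every name below is made up for illustration, not an existing RIOT API:

    #include <stdint.h>

    /* The driver hands out raw counts plus the conversion constant as a
     * rational number, and never converts anything itself. */
    int32_t my_sensor_read_raw(void);       /* raw ADC counts */
    #define MY_SENSOR_SCALE_NUM  250        /* millivolts = raw * NUM / DEN */
    #define MY_SENSOR_SCALE_DEN  1024

    /* A user who wants physical units converts once, at the edge
     * (widening to 64 bits so the multiply cannot overflow): */
    static int32_t to_millivolt(int32_t raw)
    {
        return (int32_t)(((int64_t)raw * MY_SENSOR_SCALE_NUM)
                         / MY_SENSOR_SCALE_DEN);
    }

    /* A control loop skips the conversion entirely by folding the scale
     * into its own gain: out = K * (raw * NUM / DEN) = K' * raw. */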

> The design rules that are clear to me are:
>
> - Drivers have to provide an interface for polling with init and read
> that is compatible with SAUL.


Yes. It makes all interfaces consistent. That being said, it is sad that there is no unified way to configure devices or to do interrupt-driven measurements.

> - Outputs are always 16-bit integers.


I think it is a bad idea to limit output to 16 bits. ADCs meant for scales, for example, usually have 24 bits [1]. Other applications also demand more than 16 bits. Keep in mind that 16 bits is equivalent to 4.8 decimal digits; take 1 bit for the sign and you are left with 4.5.

[1] http://www.analog.com/en/products/ad7799.html
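
For reference, the digit equivalence is digits = bits * log10(2):

    16 * log10(2) ≈ 16 * 0.301 ≈ 4.8 decimal digits
    15 * log10(2) ≈ 4.5 digits (one bit reserved for the sign)
    24 * log10(2) ≈ 7.2 digits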

> What else?


Maybe off topic, but I think we need an IO layer (think SAUL, but more complete) so that the user does not have to interact with drivers directly. It would answer many of your questions, as in that case there would be a well-defined interface that device drivers would have to expose. It is an OS, after all.
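
Roughly what I have in mind, with every name below invented purely for illustration:

    #include <stddef.h>
    #include <stdint.h>

    /* A unified device interface: SAUL-like, but with configuration and
     * event-driven reads as first-class operations. */
    typedef struct io_dev io_dev_t;

    typedef void (*io_event_cb_t)(io_dev_t *dev, void *arg);

    typedef struct {
        int (*init)(io_dev_t *dev);
        int (*read_raw)(io_dev_t *dev, int32_t *buf, size_t len);
        int (*configure)(io_dev_t *dev, unsigned opt,
                         const void *val, size_t len);
        int (*set_event_cb)(io_dev_t *dev, io_event_cb_t cb, void *arg);
    } io_driver_t;

    struct io_dev {
        const io_driver_t *driver;  /* shared per device type */
        void *state;                /* per-instance state */
    };

Applications would then program against io_dev_t and never include a driver header directly.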

Regards,

Juan I Carrano.