On Tue, Oct 30, 2018 at 4:58 AM Keith Medcalf <kmedc...@dessus.com> wrote:

>
> If you don't mind me asking, what sort of data are you collecting?
> Are you the master (ie, scanning) or a slave (getting async data pushed to
> you).
> Are you "compressing" the returned data (storing only changes exceeding
> the deadband) or are you storing every value (or is the source instrument
> doing compression)?
>
> I presume you need to store the TimeStamp, Point, Value and Confidence.
> What is the data rate (# Points and Frequency)
>

The bulk of the data consists of streams of AC signals pushed from a
handful of 3-axis accelerometers, which are more or less synchronous.
The data rate is on the order of a few hundred samples/sec per sensor.
A first software layer handles buffering and passes one-second buffers to a
second layer, which then saves them to the database for later analysis.
The database schema currently consists of a single table with roughly the
following columns: timestamp, sensorid (string), datatype (string), and a
string containing the JSON encoding of those few hundred samples (as a JSON
array).
So each row takes up about 3-4 KB (~200 samples * ~5 bytes/sample * 3 axes
+ overhead).
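
For concreteness, here is a rough sketch of what the current scheme looks
like (table and column names are simplified, and the Python/sqlite3 code is
only illustrative, not the actual software layer):

    import json, sqlite3, time

    db = sqlite3.connect("sensors.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS readings (
            timestamp REAL NOT NULL,   -- start of the one-second buffer
            sensorid  TEXT NOT NULL,   -- which accelerometer
            datatype  TEXT NOT NULL,   -- e.g. 'accel'
            samples   TEXT NOT NULL    -- JSON array of ~200 [x, y, z] samples
        )
    """)

    # One second worth of data from one sensor: ~200 [x, y, z] triples.
    buffer = [[0.01, -0.02, 0.98]] * 200
    db.execute(
        "INSERT INTO readings VALUES (?, ?, ?, ?)",
        (time.time(), "acc01", "accel", json.dumps(buffer)),
    )
    db.commit()
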
At a later stage one may want to pack adjacent chunks together into even
longer strings (so as to reduce the total number of rows) and/or store the
data in a more efficient manner (e.g. in binary or compressed form).
I don't particularly like this way of storing logical streams (i.e. time
series) in chunks, but I couldn't find any better way.
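
As a rough idea of the "more efficient" variant, the same one-second buffer
could be packed as raw floats and compressed before being stored as a BLOB
(again just a sketch, not what is running now):

    import sqlite3, struct, time, zlib

    db = sqlite3.connect("sensors.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS readings_bin (
            timestamp REAL NOT NULL,
            sensorid  TEXT NOT NULL,
            datatype  TEXT NOT NULL,
            samples   BLOB NOT NULL    -- zlib-compressed little-endian float32 triples
        )
    """)

    buffer = [[0.01, -0.02, 0.98]] * 200                 # ~200 [x, y, z] triples
    flat = [v for triple in buffer for v in triple]
    blob = zlib.compress(struct.pack("<%df" % len(flat), *flat))

    db.execute(
        "INSERT INTO readings_bin VALUES (?, ?, ?, ?)",
        (time.time(), "acc01", "accel", blob),
    )
    db.commit()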

There are also some much less frequent readings (scalar quantities
collected every 30 seconds or so) currently stored in the same table.

Any suggestions on how to improve this?