On 2013-04-29 1:51 PM, Taras Glek wrote:
> * How to robustly write/update small datasets?
>
> #3 above is it for small datasets. The correct way to do this is to
> write blobs of JSON to disk. End of discussion.

For an API that is meant to be used by add-on authors, I'm afraid the situation is not as simple as this. For example, with a "simple" key/value store intended for small datasets, one cannot enforce the implicit requirements of this solution (the data fitting in a single block on disk, for example) at the API boundary without creating a crappy API which "fails" some of the time when the value to be written violates those assumptions. In practice it is not easy for the consumer of the API to guarantee the size of the data written to disk if the data comes from the user, the network, etc.
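To illustrate (a hypothetical sketch in Python, not a real or proposed API): a store that enforces a single-block size cap at the boundary ends up throwing for reasons the caller can neither predict nor handle sensibly.

    import json

    BLOCK_SIZE = 4096  # hypothetical cap: value must fit in one disk block

    class BlockCappedStore:
        """Hypothetical KV store that rejects values too big for a block."""

        def __init__(self):
            self._data = {}

        def set(self, key, value):
            blob = json.dumps({key: value}).encode("utf-8")
            if len(blob) > BLOCK_SIZE:
                # A caller storing user- or network-supplied data has no
                # reasonable way to know in advance that this will happen.
                raise ValueError("value does not fit in a single disk block")
            self._data[key] = value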

> Writes of data <= ~64K should just be implemented as atomic whole-file
> read/write operations. Those are almost always single blocks on disk.
>
> Writing a whole file at once eliminates risk of data corruption.
> Incremental updates are what makes sqlite do the WAL/fsync/etc dance
> that causes much of the slowness.

Is that true even if the file is written to more than one physical block on the disk, across all of the filesystems that Firefox can run on?
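For reference, my understanding of the pattern being advocated is write-whole-file-then-rename, roughly the following (a Python sketch assuming POSIX rename semantics; the real consumer would of course be asynchronous chrome JS). Note that the atomicity here comes from the rename, not from the data happening to fit in one block, which is why I'm asking about the single-block claim.

    import json, os, tempfile

    def write_atomic(path, obj):
        """Write a small JSON blob as one whole file: temp file + rename.

        Readers observe either the old contents or the new contents,
        never a partially written file."""
        data = json.dumps(obj).encode("utf-8")
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())  # push the data to disk before the rename
            os.replace(tmp_path, path)  # atomic on POSIX; also replaces on Windows
        except BaseException:
            os.unlink(tmp_path)  # don't leave the temp file behind on failure
            raise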

> As you can see from above examples, manual IO is not scary

Only if you trust the consumer of the API to know the trade-offs of what they're doing. That is not the right assumption for a generic key/value store API.

> * What about fsync-less writes?
> Many log-type performance-sensitive data-storage operations are ok with
> lossy appends. By lossy I mean "data will be lost if there is a power
> outage within a few seconds/minutes of write"; consistency is still
> important. For this one should create a directory and write out log
> entries as checksummed individual files...but one should really use
> compression (and get checksums for free).
> https://bugzilla.mozilla.org/show_bug.cgi?id=846410 is about
> facilitating such an API.
>
> Use-cases here: telemetry saved-sessions, FHR session-statistics.
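(If I read this correctly, the idea is one compressed file per log entry, with gzip's trailing CRC32 serving as the "free" checksum. A rough Python sketch, with hypothetical names and no fsync anywhere:)

    import gzip, json, os, time, zlib

    LOG_DIR = "saved-log-entries"  # hypothetical directory of entries

    def append_entry(obj):
        """Write one entry as its own gzip file; a real implementation
        would need collision-proof file names."""
        os.makedirs(LOG_DIR, exist_ok=True)
        name = os.path.join(LOG_DIR, "%d.json.gz" % (time.time() * 1000))
        with gzip.open(name, "wb") as f:
            f.write(json.dumps(obj).encode("utf-8"))

    def read_entries():
        """Yield entries that pass gzip's CRC check, skipping entries
        truncated or corrupted by a crash or power loss."""
        for name in sorted(os.listdir(LOG_DIR)):
            try:
                with gzip.open(os.path.join(LOG_DIR, name), "rb") as f:
                    yield json.loads(f.read().decode("utf-8"))
            except (OSError, EOFError, zlib.error, ValueError):
                continue  # lossy by design: drop the bad entry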

This is an interesting use case indeed, but I don't think that it falls under the umbrella of the API being discussed here.

> * What about large datasets?
> These should be decided on a case-by-case basis. Universal solutions
> will always perform poorly in some dimension.

* What about indexeddb?
IDB is overkill for simple storage needs. It is a restrictive wrapper
over an SQLite schema. Perhaps some large dataset (eg an addressbook) is
a good fit for it. IDB supports filehandles to do raw IO, but that still
requires sqlite to bootstrap, doesn't support compression, etc.
IDB also makes sense as a transitional API for web due to the need to
move away from DOM Local Storage...

Indexed DB is not a wrapper around SQLite. The fact that our current implementation uses SQLite is an implementation detail which might change. (And it's not true on the web across different browser engines.)

I'm sure that if somebody can provide test cases demonstrating bad IndexedDB performance, we can work on fixing them, and that would benefit the web, and Firefox OS as well.

> * Why isn't there a convenience API for all of the above recommendations?
> Because speculatively landing APIs that anticipate future consumers is
> risky, results in over-engineering and unpleasant surprises... So give us
> use-cases and we (ie Yoric) will make them efficient.

The use case being discussed here is a simple key/value data store, hopefully with asynchronous operations and safety guarantees against data loss. I do not see the current discussion as speculative at all.
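Concretely, something on the order of the following is what I have in mind (a Python sketch with made-up names, standing in for what would really be an async JS API): the atomic-write discipline lives inside the store once, instead of being reimplemented by every consumer.

    import json, os, tempfile

    class JSONStore:
        """Minimal whole-file JSON key/value store (hypothetical sketch)."""

        def __init__(self, path):
            self.path = path
            try:
                with open(path, "rb") as f:
                    self._data = json.loads(f.read().decode("utf-8"))
            except FileNotFoundError:
                self._data = {}

        def get(self, key, default=None):
            return self._data.get(key, default)

        def set(self, key, value):
            self._data[key] = value
            self._flush()

        def _flush(self):
            # Whole-file write + atomic rename: the store can never be
            # observed half-written, whatever the size of the values.
            blob = json.dumps(self._data).encode("utf-8")
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
            with os.fdopen(fd, "wb") as f:
                f.write(blob)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp, self.path)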

Cheers,
Ehsan
