On Tue, Apr 28, 2015 at 4:16 AM, James K. Lowden <jklowden at schemamania.org> wrote:
> A major hurdle is the memory model: because array-programming
> libraries normally mandate the data be in contiguous memory, there's
> a cost to converting to/from the DBMS's B+ tree. The more array-like
> the physical storage of the DBMS, the more it cedes transactional and
> update capability.

Well, just look at how Oracle solved that problem. The row data doesn't hold the blob itself, as in SQLite, but an index to separate blob pages. This proverbial level of indirection brings tremendous benefits, because you can then update a blob without having to rewrite the whole blob (you copy only the "blob" page(s) being updated, and copy the row into a new "data" page with an updated "blob index" in which just the few entries pointing to the updated pages change). You can thus easily extend a blob without rewriting it all, as SQLite does now, and in-place updates via the SQLite blob API would no longer break transactionality, as they do now.

This is IMHO one of the biggest problems with SQLite right now, even for its primary purpose as an "application file format". That and the lack of arrays and UDTs (in that order, i.e. blobs, arrays, UDTs). FWIW :). I love SQLite. That doesn't make me blind to what it lacks for my use cases, though. My $0.02. --DD
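To make the indirection concrete, here is a minimal, hypothetical sketch of the scheme described above (not Oracle's or SQLite's actual on-disk format): a row stores a list of page IDs instead of the blob bytes, and updating a byte range copies only the pages it touches, producing a new index that shares every untouched page with the old one. The old index still reads back the old blob, which is exactly what a transaction needs.

```python
# Copy-on-write blob storage via a page index (illustrative sketch).
# All names (PageStore, write_blob, update_blob) are invented for this
# example; they do not correspond to any real DBMS API.

PAGE_SIZE = 4  # tiny on purpose, so the example is easy to trace


class PageStore:
    """An append-only pool of immutable pages, keyed by page ID."""

    def __init__(self):
        self.pages = {}   # page_id -> bytes
        self.next_id = 0

    def alloc(self, data: bytes) -> int:
        pid = self.next_id
        self.next_id += 1
        self.pages[pid] = data
        return pid


def write_blob(store: PageStore, data: bytes) -> list[int]:
    """Split a blob into fixed-size pages; return its page index."""
    return [store.alloc(data[i:i + PAGE_SIZE])
            for i in range(0, len(data), PAGE_SIZE)]


def update_blob(store: PageStore, index: list[int],
                offset: int, patch: bytes) -> list[int]:
    """Copy-on-write update: only pages overlapping the patched range
    are copied; the returned index shares all untouched pages."""
    new_index = list(index)
    end = offset + len(patch)
    first, last = offset // PAGE_SIZE, (end - 1) // PAGE_SIZE
    for p in range(first, last + 1):
        page = bytearray(store.pages[index[p]])
        lo = max(offset, p * PAGE_SIZE)
        hi = min(end, (p + 1) * PAGE_SIZE)
        page[lo - p * PAGE_SIZE:hi - p * PAGE_SIZE] = \
            patch[lo - offset:hi - offset]
        new_index[p] = store.alloc(bytes(page))  # new page, old one kept
    return new_index


def read_blob(store: PageStore, index: list[int]) -> bytes:
    return b"".join(store.pages[p] for p in index)
```

Patching 5 bytes of a 12-byte blob copies only the two 4-byte pages it overlaps; the first page (and the old index as a whole) stays shared, so the pre-update version remains readable until the transaction commits. Extending a blob is just appending page IDs to the index, with no rewrite of existing pages.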

