Dimitris P. Servis wrote:
> 2) The file would be portable and movable (i.e. copy-paste will do, no
> special arrangement to move around)

You can do it if you relax that requirement from a single file to a single
directory.  The easiest way of dealing with large blobs is to save them
as files and record the filenames in the database, so you could put the
files and the SQLite db in the same directory.
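A minimal sketch of that layout; the directory name, table schema, and helper names are my own invention, not anything SQLite prescribes:

```python
import os
import sqlite3

# Hypothetical layout: the SQLite db and the blob files live in one
# directory, so copy-pasting that directory moves everything together.
store = "datastore"
os.makedirs(store, exist_ok=True)

db = sqlite3.connect(os.path.join(store, "index.db"))
db.execute("CREATE TABLE IF NOT EXISTS blobs (name TEXT PRIMARY KEY, filename TEXT)")

def put_blob(name, data):
    # Save the payload as a plain file; the db records only the filename.
    filename = name + ".bin"
    with open(os.path.join(store, filename), "wb") as f:
        f.write(data)
    db.execute("INSERT OR REPLACE INTO blobs VALUES (?, ?)", (name, filename))
    db.commit()

def get_blob(name):
    (filename,) = db.execute(
        "SELECT filename FROM blobs WHERE name = ?", (name,)).fetchone()
    with open(os.path.join(store, filename), "rb") as f:
        return f.read()
```

Since only filenames pass through SQLite, the db stays small no matter how large the blobs get.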

Depending on the content you can also do things like break the blobs
into pieces (e.g. each 10MB in size) and use the MD5 of each piece as its
filename.  If there is duplication between blob contents then this will
help save space, and it is also a race-free way to create the blobs.
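A sketch of that content-addressed chunking; the function name and return convention are assumptions for illustration:

```python
import hashlib
import os

CHUNK = 10 * 1024 * 1024  # 10MB pieces, as suggested above

def store_chunks(data, directory):
    """Split data into fixed-size pieces named by the MD5 of their contents.

    Writing to a name derived from the content is race free: two writers
    producing the same chunk write identical bytes to the same filename,
    and duplicated pieces are stored only once.
    """
    names = []
    for i in range(0, len(data), CHUNK):
        piece = data[i:i + CHUNK]
        name = hashlib.md5(piece).hexdigest()
        path = os.path.join(directory, name)
        if not os.path.exists(path):   # dedup: identical pieces share a file
            with open(path, "wb") as f:
                f.write(piece)
        names.append(name)
    return names  # store this ordered list in SQLite to rebuild the blob
```

The ordered list of chunk names is what you would record in the database, one row per piece.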

> Ideally I would like to provide client programs with a stream to read
> and write files.

If you have to provide it at the SQLite level then look into virtual
tables.  You can have a backend provider that aggregates the chunks back
into blobs.  You can even make tables where each chunk is a row of an
overall blob, since the SQLite API only gives you the entire blob at once.
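A real virtual table needs the C API (or a wrapper such as APSW), but the chunk-per-row idea itself can be sketched with a plain table; the schema and helper names here are mine, not part of SQLite:

```python
import sqlite3

# Hypothetical schema: each row holds one piece of a larger blob, ordered
# by seq.  Clients read and write the data as a stream, and no single
# SQLite value ever holds the whole blob.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE chunks (
                  blob_name TEXT, seq INTEGER, piece BLOB,
                  PRIMARY KEY (blob_name, seq))""")

def write_stream(name, stream, piece_size=4096):
    # Consume a file-like object piece by piece, one row per piece.
    seq = 0
    while True:
        piece = stream.read(piece_size)
        if not piece:
            break
        db.execute("INSERT INTO chunks VALUES (?, ?, ?)", (name, seq, piece))
        seq += 1
    db.commit()

def read_stream(name):
    # Generator that yields the pieces back in order.
    for (piece,) in db.execute(
            "SELECT piece FROM chunks WHERE blob_name = ? ORDER BY seq",
            (name,)):
        yield piece
```

A virtual table could present the same chunk rows to clients as one logical blob column, with the aggregation done in the backend.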

> I should store my nice scientific data in
> tables and define good relations and stuff.

I can't believe each multi-hundred-megabyte file is a single indivisible
piece of data.  You could store the real data as proper fields in
SQLite.  And if SQL semantics aren't quite what you want, you can
certainly use virtual tables to make the data appear in any form you
want, on demand.
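A small sketch of the "proper fields" idea, with an invented schema: one row per sample instead of one opaque blob, so SQL can slice the data directly.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE samples (run TEXT, step INTEGER, value REAL)")

# Hypothetical scientific run: 100 samples stored as individual rows
# rather than packed into a single blob.
db.executemany("INSERT INTO samples VALUES (?, ?, ?)",
               [("run1", i, i * 0.5) for i in range(100)])

# Query a window of the data directly; no need to load the whole file.
rows = db.execute(
    "SELECT step, value FROM samples WHERE run = ? AND step BETWEEN 10 AND 20",
    ("run1",)).fetchall()
```

Once the data is in rows, indexing, filtering, and joining all come for free from SQL.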

Roger