Re: [sqlite] Writing in a blob

2013-04-28 Thread Eduardo
On Fri, 26 Apr 2013 14:02:02 -0400
Roland Hughes rol...@logikalsolutions.com wrote:

 It actually does make sense to add chunking to sqlite.  There would be
 some computational overhead, but, that all depends on the chunk size and
 the cache size of the database.  It makes no sense to implement YAFS
 (Yet Another File System) inside of SQLite.

I agree with you that a generic YAFS makes no sense, but a pseudo-fs for your app
data, I think, does.

Also, distributed file systems (Lustre, for example) use databases to store
filesystem metadata, and a simple distributed file system with basic semantics,
including locks, could easily be built on sqlite.
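For example, a minimal advisory lock table needs nothing more than a UNIQUE constraint; the table and helper names below are illustrative, not from any real project:

```python
import sqlite3

def make_db(path=":memory:"):
    db = sqlite3.connect(path, isolation_level=None)  # autocommit
    # One row per held lock; the PRIMARY KEY constraint *is* the lock.
    db.execute("CREATE TABLE IF NOT EXISTS locks (path TEXT PRIMARY KEY, owner TEXT)")
    return db

def try_lock(db, path, owner):
    """Return True if the lock on 'path' was acquired, False if already held."""
    try:
        db.execute("INSERT INTO locks (path, owner) VALUES (?, ?)", (path, owner))
        return True
    except sqlite3.IntegrityError:
        return False

def unlock(db, path, owner):
    db.execute("DELETE FROM locks WHERE path = ? AND owner = ?", (path, owner))

db = make_db()
assert try_lock(db, "/data/unit42", "worker-1")      # acquired
assert not try_lock(db, "/data/unit42", "worker-2")  # already held
unlock(db, "/data/unit42", "worker-1")
assert try_lock(db, "/data/unit42", "worker-2")      # free again
```

SQLite's own file locking serializes the INSERTs, which is what makes the constraint safe to use as a mutex across processes sharing the database file.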


-- 
Eduardo emorr...@yahoo.es
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Writing in a blob

2013-04-26 Thread Roland Hughes
Speaking as an IT professional with 20+ years in the field, I would have
to say adding any kind of file system support to SQLite would be a
horrible thing.  Yes, I've used Oracle.  I've also used the only real
product Oracle has, RDB on OpenVMS.  I've written books covering MySQL,
PostgreSQL, and Object databases like POET.

Lite is called lite for a reason.  Bare functionality with a lot of
speed.

The architects for this product need to take a lesson from the old DOS
xBase systems.  Blobs should not be handled as one big unit.  They need
to be given unique IDs and stored in fixed-size chunks off in a hidden
table, much like a memo field was back in the day.  The hidden or
child table supporting the blob column would have a key of ID +
sequence.  The actual row should be ID, Sequence, BytesUsed, Chunk.
The BytesUsed column allows you to keep exact byte sizes.  All Chunk data
types should be a raw byte data type.  There should be multiple chunk
column types: chunk24, chunkM, and chunkXM for 1024-byte, 1 MB, and
10 MB chunks.
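For illustration, here is how an application could emulate that layout on top of today's SQLite; the table and helper names are made up for the sketch:

```python
import sqlite3

CHUNK = 1024  # stand-in for the proposed chunk24 size

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE blob_chunks (
    id         INTEGER,   -- the blob's unique ID
    seq        INTEGER,   -- chunk sequence number
    bytes_used INTEGER,   -- exact bytes used in this chunk
    chunk      BLOB,      -- raw fixed-size chunk
    PRIMARY KEY (id, seq))""")

def write_blob(db, blob_id, data):
    for seq, off in enumerate(range(0, len(data), CHUNK)):
        piece = data[off:off + CHUNK]
        # Pad to the fixed chunk size; bytes_used records the real length.
        padded = piece + b"\x00" * (CHUNK - len(piece))
        db.execute("INSERT INTO blob_chunks VALUES (?, ?, ?, ?)",
                   (blob_id, seq, len(piece), padded))

def read_blob(db, blob_id):
    rows = db.execute("SELECT bytes_used, chunk FROM blob_chunks "
                      "WHERE id = ? ORDER BY seq", (blob_id,))
    return b"".join(chunk[:used] for used, chunk in rows)

data = bytes(range(256)) * 9   # 2304 bytes: two full chunks plus a partial one
write_blob(db, 1, data)
assert read_blob(db, 1) == data
```

Growing the blob is then just inserting more rows with higher sequence numbers; nothing already stored needs to be rewritten.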


On Tue, 2013-04-23 at 09:50 +0200, Dominique Devienne wrote:

 On Mon, Apr 22, 2013 at 2:10 PM, Simon Slavin slav...@bigfraud.org wrote:
 
  On 22 Apr 2013, at 12:39pm, Paolo Bolzoni wrote:
   But I noticed that sqlite3_blob_write cannot increase the size of the
  pointed
   open blob. So I ask, there is a way to treat a blob as a stream so I can
  write
   or read values in it with ease?
 
  Unfortunately the size (length) of the BLOB is very significant to the
  larger space-handling aspects of SQLite's file format.  Making a BLOB
  longer could force SQLite to move the data from page to page and do lots of
  other internal reorganisation.  So you can reserve extra space when you
  write the BLOB, and you can read whatever you want, but the documentation
  is accurate.
 
 
 I also really wish SQLite blobs would map directly to the usual FILE*
 semantic, both in being able to grow a blob via writing (and truncate it
 too), but also and more importantly not rewriting the whole row or blob
 when modifying only a few bytes of the blob, but only affected pages.
 Basically another level of indirection, where the row holds only a blob
 locator (like in Oracle), and the blob value is in separate, not
 necessarily contiguous pages, as described here for example:
 http://jonathanlewis.wordpress.com/2013/03/22/lob-update/. That way only
 modified blob pages would need to participate in the transaction. SQLite is
 not MVCC like Oracle, but the ability to at least not overwrite the whole
 blob when changing 1 byte would be great. (I'm assuming there isn't, but
 I'm no SQLite expert). My $0.02. --DD


-- 
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net

No U.S. troops have ever lost their lives defending our ethanol
reserves.


Re: [sqlite] Writing in a blob

2013-04-26 Thread Paolo Bolzoni
Writing a FS with sqlite3 as the backend sounds like an interesting challenge.

But I would like to remind everyone that the question was about
writing an arbitrary-precision integer to the DB, given that
the library writes the representation to a FILE*.

In the end I wrote a little FILE* wrapper around blob_open
handles that supports only the modes r or w, and applied it to
an existing blob or to a newly created zeroblob.
It seems to work fine.
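The read side of such a wrapper can be sketched even without the C-level blob API, since SQLite's substr() and length() work on BLOBs; the class and schema below are illustrative only, not the actual wrapper:

```python
import io
import sqlite3

class BlobReader(io.RawIOBase):
    """Read-only, seekable stream over one blob cell."""
    def __init__(self, db, table, column, rowid):
        # Identifiers cannot be bound as parameters, hence the f-strings.
        self.db, self.table, self.column, self.rowid = db, table, column, rowid
        self.pos = 0
        self.size, = db.execute(
            f"SELECT length({column}) FROM {table} WHERE rowid = ?",
            (rowid,)).fetchone()

    def seek(self, offset, whence=io.SEEK_SET):
        base = {io.SEEK_SET: 0, io.SEEK_CUR: self.pos,
                io.SEEK_END: self.size}[whence]
        self.pos = base + offset
        return self.pos

    def read(self, n=-1):
        if n < 0:
            n = self.size - self.pos
        # substr() is 1-based and, applied to a BLOB, returns a BLOB slice.
        data, = self.db.execute(
            f"SELECT substr({self.column}, ?, ?) FROM {self.table} "
            "WHERE rowid = ?", (self.pos + 1, n, self.rowid)).fetchone()
        self.pos += len(data)
        return data

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (data BLOB)")
db.execute("INSERT INTO t (data) VALUES (?)", (b"hello blob world",))
f = BlobReader(db, "t", "data", 1)
f.seek(6)
assert f.read(4) == b"blob"
```

Writes are the hard half, as discussed elsewhere in this thread: at the SQL level you cannot patch a byte range in place, which is exactly why the C API's sqlite3_blob_write() over a preallocated zeroblob is the right tool there.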

Paolo

On Fri, Apr 26, 2013 at 5:30 PM, Roland Hughes
rol...@logikalsolutions.com wrote:
 Speaking as an IT professional with 20+ years in the field, I would have
 to say adding any kind of file system support to SQLite would be a
 horrible thing.  [rest of quoted message snipped]


Re: [sqlite] Writing in a blob

2013-04-26 Thread Stephen Chrzanowski
+0.75 to Roland for splitting the data, and another +1 for reiterating the
true functionality of what LITE means.

Splitting the data into chunks will help keep a database small, since
the raw data can go into pages that were previously marked as
deleted, thereby not increasing the database size.  I think raw blob data
is already stored as a separate entity within the DB, but I'm only about
5% through reading the 4-meg amalgamation source code.  (Yes, I'm reading it
line by line, manually.  Fortunately I've written a book program that
reads text files and remembers what page I'm on. ;))

The loss of 0.25 comes from the contradiction between staying light and the
increased workload.  Streaming a chunk of data as one huge thing is going to
be faster with regard to writing and reading.  You write data first byte to
last, and only stop at EOF or if an IO error comes up.  Read the data
in with the same thought process.  No real CPU processing is going to be
affected.  If you start splitting the data, you're going to
start adding processing overhead to determine unused records, pages, and
whatnot.  On write, the DBMS has to figure out what isn't used, see if
it has enough space in the page to write, and so on.  This takes away from
the speed part of Lite.  Reading will be less intensive; however, more
thought has to happen to properly read the data back in.

So now the question comes to which camp wins: the streaming group or the
partitioning group?  Both can win and still maintain backwards compatibility
for reading; however, with databases written using partitioned blobs,
older versions of SQLite3 won't know what is going on, so they'll break.

Assuming (since I'm not sure) that the streaming option is what is
implemented in the current implementation of SQLite3, that particular code
base stays in place.  When a database is created, a PRAGMA can be set PRIOR
to the first write of any BLOB data that would indicate how the database
writes the data.  Resetting this pragma would have no effect after the
first BLOB is written, even if all BLOB data is removed.  By default, the
old method can be used so older versions of SQLite can handle the data.
This could potentially be extended to a per-table field for future
endeavors.

So now we're looking at one extra check to see how the data should be
written and read, which takes away from speed (albeit an extremely small slice
of time); however, both camps win.

ALL THAT SAID, I doubt it'd get implemented due to



On Fri, Apr 26, 2013 at 11:30 AM, Roland Hughes rol...@logikalsolutions.com
 wrote:

 Speaking as an IT professional with 20+ years in the field, I would have
 to say adding any kind of file system support to SQLite would be a
 horrible thing.  [rest of quoted message snipped]



Re: [sqlite] Writing in a blob

2013-04-26 Thread Richard Hipp
On Fri, Apr 26, 2013 at 12:26 PM, Stephen Chrzanowski
pontia...@gmail.com wrote:

 Streaming a chunk of data as one huge thing is going to be
 faster in regards to writing and reading.


That depends.  See http://www.sqlite.org/intern-v-extern-blob.html


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Writing in a blob

2013-04-26 Thread Simon Slavin

On 26 Apr 2013, at 5:26pm, Stephen Chrzanowski pontia...@gmail.com wrote:

 ALL THAT SAID, I doubt it'd get implemented

I'm also in this category.  In fact I hope it doesn't get implemented.  Yes, 
technically it can be done.  But it's the sort of thing people assign as 
Computer Science homework.

If you're running SQLite you already have a file system.  I don't see any point
in trying to squeeze a file system inside a DBMS inside a file system.  And
SQLite, having been designed to be 'lite' in the first place, isn't a good
place to do it.  That task stops only when you've implemented POSIX.

If anyone is interested in pouring time into features which would be a good 
natural fit for SQLite4, here are two ideas:

A) The features currently built into FTS4 should be expressible in a more
SQL-like way, and it should be possible to implement searching many long text
fields for combinations of kinds of text in a better way.  Devise a storage
format for something like

CREATE TOKENTABLE TABLE myTokenTable FOR myTable USING SEPARATORS (list here) 
AND STOPLIST (list here)

which uses the existing rowids in myTable to indicate which row of the 
underlying table a token appears in.  Work out semantics to allow for the 
updating of a token table after a transaction in which the underlying table is 
updated.  Work out semantics for searching the token table and the underlying 
table which are an elegant fit for SQLite3 syntax and English.  If you have 
time, write the code.

B) Work out a data structure which uses existing SQLite4 data types and can be
used to store a table definition, including all the things SQLite4 allows in the
CREATE TABLE command.  In other words, if you were to store the information in
the CREATE TABLE command in a structured way instead of as the command string,
how would you do it?  Your design should allow for easy implementation of some
of the ALTER TABLE commands which SQLite3 doesn't support.  Pay great attention
to being able to spot when a command should not be completed because it breaks
the schema.  If you have time, provide code which allows for conversion between
string and structured representations of the CREATE TABLE command, perhaps
using the Lemon parser for the string-to-structure direction.  You do not need
to worry about ALTER TABLE commands running quickly: assume that if one is used
it can take as long as needed to run.

Simon.


Re: [sqlite] Writing in a blob

2013-04-26 Thread Roland Hughes
It actually does make sense to add chunking to sqlite.  There would be
some computational overhead, but, that all depends on the chunk size and
the cache size of the database.  It makes no sense to implement YAFS
(Yet Another File System) inside of SQLite.

While many here view SQLite only in terms of desktop applications, the
reality is it gets used in embedded systems for data gathering and image
processing.  Some of these systems gather data in real time, and others
poll devices periodically to obtain a unit of work.  The devices
generating the data have completely different operating systems and even
different endianness than the data collection system.  These units of
work are eventually uploaded to yet another system (usually midrange or
mainframe) where they are processed into industrial strength database
systems in a much more granular fashion.  Chunking of data allows for
units of work to be arbitrary sizes.

I understand that many of you reading this may not grasp the application
so I will bastardize some real life stuff you might be able to picture.
Many of you probably run BOINC and participate in some noble research
project with the idle time of your computer.  (If you don't, you
should.)  No matter the project, they bust up massive amounts of data
into chunks.  Somewhere a table in a database identifies each chunk, the
date it was collected, processed, who processed it, a corresponding
results chunk, and some summary result information fields.  When your
BOINC client connects with the server it scans the database to identify
the next available chunk or chunks, assigns them to you, then sends the
chunks to your client for processing.  The database and the client do
not care about the content of the chunk, just its size and transmission
CRC.  For lack of a better description, the client plug-in for the
project is the only piece which knows about the content of the chunk and
how to process it.

It should be possible to add chunking to the database itself in such a
manner that any user who does not actually use blobs in their database
does not pay a computational penalty for the feature.

There is an ever increasing number of embedded systems which would like
to use a linkable database, but stumble when it comes to raw data
storage.

Just my 0.02.




On Fri, 2013-04-26 at 18:16 +0100, Simon Slavin wrote:

 On 26 Apr 2013, at 5:26pm, Stephen Chrzanowski pontia...@gmail.com wrote:
 
  ALL THAT SAID, I doubt it'd get implemented
 
 I'm also in this category.  In fact I hope it doesn't get implemented.  Yes, 
 technically it can be done.  But it's the sort of thing people assign as 
 Computer Science homework.
 




Re: [sqlite] Writing in a blob

2013-04-25 Thread Dimiter 'malkia' Stanev



On 4/23/2013 3:17 PM, James K. Lowden wrote:

On Tue, 23 Apr 2013 10:28:35 -0400
Richard Hipp drh-czdrofg0bjidnm+yrof...@public.gmane.org wrote:


In summary:  No, a bare SQLite blob does not provide file-system
semantics.  But you can write a wrapper library around SQLite that
does provide file-system semantics for large blobs, and doing so
would have many advantages and be a worth-while project, I think.


A better and simpler answer IMO would be FUSE
(http://fuse.sourceforge.net/) backed by SQLite.  That gives you true
file semantics, not that that's exactly a step forward.  ;-)

--jkl



And on Windows one can use Dokan:
http://dokan-dev.net/en/download/

Or this commercial offering:
Callback file system (I'm not affiliated with them)
http://www.eldos.com/cbfs/



Re: [sqlite] Writing in a blob

2013-04-25 Thread Stephen Lombardo
On Tue, Apr 23, 2013 at 6:17 PM, James K. Lowden
jklow...@schemamania.org wrote:

 On Tue, 23 Apr 2013 10:28:35 -0400
 Richard Hipp d...@sqlite.org wrote:

  In summary:  No, a bare SQLite blob does not provide file-system
  semantics.  But you can write a wrapper library around SQLite that
  does provide file-system semantics for large blobs, and doing so
  would have many advantages and be a worth-while project, I think.

 A better and simpler answer IMO would be FUSE
 (http://fuse.sourceforge.net/) backed by SQLite.  That gives you true
 file semantics, not that that's exactly a step forward.  ;-)


The Guardian Project has been doing some great work recently on libsqlfs,
which provides POSIX style file access using an SQLite database. It can
either be used as a standalone library or as a FUSE module:

https://github.com/guardianproject/libsqlfs

Cheers,
Stephen


Re: [sqlite] Writing in a blob

2013-04-23 Thread Richard Hipp
On Mon, Apr 22, 2013 at 7:39 AM, Paolo Bolzoni 
paolo.bolzoni.br...@gmail.com wrote:

 I noticed that sqlite3_blob_write cannot increase the size of the pointed
 open blob. So I ask, there is a way to treat a blob as a stream so I can
 write
 or read values in it with ease?


Short answer:  No.

SQLite packs information tightly together to conserve space in the database
file.  So if you have a row with 5 columns, A through E, then the contents
of those 5 columns are immediately adjacent to each other.  If you increase
the size of the blob in column C, then the content of columns D and E has to
be moved out of the way.  Similarly, multiple rows are packed tightly
together, so if you increase the size of the blob in column E, that means
subsequent rows have to be moved out of the way, or else the entire row
being changed has to be rewritten into a different spot in the file.

Going even further into the weeds: even if you happen to have some free
space in the file after the current row and the blob is the last column in the
row, due to some encoding decisions that were made way back in 2004, we
cannot increase the size of a row without rewriting the entire row.  To do
otherwise would result in an incompatible file format.

So, no, you cannot arbitrarily increase the size of a BLOB using
sqlite3_blob_write().  However, as Roger mentioned, you can populate the
blob with empty space initially using zeroblob() and then overwrite it with
the content you need, incrementally.
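That zeroblob() route can be exercised from plain SQL; a minimal sketch (the incremental write step only appears as a comment, since sqlite3_blob_write() has no SQL-level equivalent):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (data BLOB)")
# Reserve 1 MB of zero-filled space up front; the row is sized once, here.
db.execute("INSERT INTO t (data) VALUES (zeroblob(1024 * 1024))")
size, = db.execute("SELECT length(data) FROM t").fetchone()
assert size == 1024 * 1024
# A C program would now sqlite3_blob_open() this cell and overwrite byte
# ranges in place with sqlite3_blob_write(), never growing the value.
```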

Or, if you really want file-system semantics on top of your database (and
this is a reasonable idea, actually) you could write a wrapper library that
implemented file-system semantics on top of SQLite.  I propose an API
something like this:

  SQLFILE *sqlite3fs_fopen(const char *zPath, const char *zMode);
  size_t   sqlite3fs_fread(void*, size_t, size_t, SQLFILE*);
  size_t   sqlite3fs_fwrite(const void*, size_t, size_t, SQLFILE*);
  int  sqlite3fs_fseek(SQLFILE*, size_t offset, int whence);
  size_t   sqlite3fs_ftell(SQLFILE*);
  void sqlite3fs_rewind(SQLFILE*);
  int  sqlite3fs_ftruncate(SQLFILE*, size_t);
  int  sqlite3fs_rename(const char *zFrom, const char *zTo);
  int  sqlite3fs_unlink(const char *zPath);
  int  sqlite3fs_stat(const char *zPath, struct stat*);
  SQLDIR  *sqlite3fs_opendir(const char *zPath);
  dirent  *sqlite3fs_readdir(SQLDIR*);
  int  sqlite3fs_closedir(SQLDIR*);

The files in this wrapper could be stored as one or more page-sized
chunks in separate rows of some backing-store table with a designated
name.  The multiple small blobs could then be pulled together transparently
by the sqlite3fs_fread() routine.  And large writes would be split apart
into several separate INSERTs by sqlite3fs_write().  Since each chunk is a
separate row, files could be increased or decreased in size simply by
adding and deleting rows from the backing store table.  Changing or adding
a single byte to a file means rewriting the last chunk, but for a large
file that is a lot cheaper than rewriting the whole file.  Note that
real filesystems also have to write in chunks because disk drives work in
chunks.  There really is no such thing as adding a single byte to a file -
you always really overwrite the whole last page.
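The key property of that backing store, growth and truncation as plain row operations, can be sketched briefly; the table name, page size, and helpers are illustrative, not part of the proposed API:

```python
import sqlite3

PAGE = 4  # tiny "page size" for the demo

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fs_chunks (path TEXT, seq INTEGER, chunk BLOB, "
           "PRIMARY KEY (path, seq))")

def append(db, path, data):
    """Grow a file by inserting new chunk rows; existing rows are untouched."""
    nxt = db.execute("SELECT coalesce(max(seq) + 1, 0) FROM fs_chunks "
                     "WHERE path = ?", (path,)).fetchone()[0]
    for i in range(0, len(data), PAGE):
        db.execute("INSERT INTO fs_chunks VALUES (?, ?, ?)",
                   (path, nxt + i // PAGE, data[i:i + PAGE]))

def truncate(db, path, keep_chunks):
    """Shrink a file by deleting its trailing chunk rows."""
    db.execute("DELETE FROM fs_chunks WHERE path = ? AND seq >= ?",
               (path, keep_chunks))

def contents(db, path):
    rows = db.execute("SELECT chunk FROM fs_chunks WHERE path = ? "
                      "ORDER BY seq", (path,))
    return b"".join(c for (c,) in rows)

append(db, "/f", b"abcdefgh")   # two 4-byte chunks
append(db, "/f", b"ijkl")       # grows the file: one more row, no rewrite
assert contents(db, "/f") == b"abcdefghijkl"
truncate(db, "/f", 1)           # keep only the first chunk
assert contents(db, "/f") == b"abcd"
```

(For simplicity this append assumes earlier writes ended on a chunk boundary; a real wrapper would also rewrite a partial last chunk, as the paragraph above notes.)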

Putting together a library such as outlined above would provide a
convenient and simple migration path for applications that wanted to
transition from a pile-of-files storage format toward using an SQLite
database as their application file format.

Imagine what could happen if, for example, git were to start using this
library.  Instead of the .git directory containing lots and lots of
separate files, your repository would be a single .git file which was
really an SQLite database accessed through the sqlite3fs wrapper.
Minimal changes (renaming file I/O routines) would be required to the git
core.  But once you have an SQLite database used in place of a
pile-of-files, you can start adding new tables to do lots of interesting
things like hosting bug reports or a wiki as part of your repository (as
Fossil does).

As another example, consider the OpenOffice presentation application whose
current file format (the *.odp file) is a ZIP archive containing lots of
image and XML files.  If the *.odp file were replaced by an SQLite
database with the sqlite3fs wrapper, then you would be able to make
incremental changes to an OpenOffice presentation, instead of having to
recreate the entire multi-megabyte ZIP archive in the *.odp file simply
because of a one-character change on one slide.  And your changes would be
atomic, obviating the need for those annoying backups that OpenOffice makes
periodically.

In summary:  No, a bare SQLite blob does not provide file-system
semantics.  But you can write a wrapper library around SQLite that does
provide file-system semantics for large blobs, and doing so would have many
advantages and be a worth-while project, I think.

If anyone reading this wants to undertake such a project, I can provide you
with technical advice, website space with a suitable domain name (ex:
filesystem.sqlite.org), and a Fossil repository to keep the code in.

Re: [sqlite] Writing in a blob

2013-04-23 Thread Stephan Beal
On Tue, Apr 23, 2013 at 4:28 PM, Richard Hipp d...@sqlite.org wrote:

 ...implemented file-system semantics on top of SQLite.  I propose an API
 something like this:

   SQLFILE *sqlite3fs_fopen(const char *zPath, const char *zMode);


For anyone wanting to take that (admittedly large) step, i have a working
embedded filesystem library which essentially does what is described
here. It is not backed by an sqlite3 db (but by an abstract i/o device
interface), but it could be used as a basis for creating such an
sqlite-backed beast:

http://fossil.wanderinghorse.net/wikis/whio/?page=whio_epfs

If anyone reading this wants to undertake such a project, I can provide you
 with technical advice, website space with a suitable domain name (ex:
 filesystem.sqlite.org), and a Fossil repository to keep the code in.


http://fossil.wanderinghorse.net/repos/whio/

:-D

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal


Re: [sqlite] Writing in a blob

2013-04-23 Thread Teg
Imagine what could happen if, for example, git where to start using this
library.  Instead of the .git directory containing lots and lots of
separate files, your repository would be a single .git file which was
really an SQLite database accessed through the sqlite3fs wrapper.

I use Sqlite for a git-like version control system for my own
projects. I also use it for storing image files similar to CBR/CBZ
files. Having random access to the files makes it a better solution
than CBR/CBZ because they're basically RAR and ZIP files with
no random access. In general, there are many benefits to keeping a
bunch of smallish files in a DB you can move around and change
atomically.

I do tend to keep blobs in their own database files and the meta-data
in another file. The blobs seem to impact the performance of the meta-data
tables when they're combined into a single DB.
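That split can be done with ATTACH, so the two files still behave as one logical database; the filenames and tables here are illustrative:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
meta_path = os.path.join(tmp, "meta.db")
blob_path = os.path.join(tmp, "blobs.db")

db = sqlite3.connect(meta_path)
# Attach the second file; its tables are addressed via the "blobs." prefix.
db.execute("ATTACH DATABASE ? AS blobs", (blob_path,))
db.execute("CREATE TABLE main.files (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE blobs.contents (file_id INTEGER, data BLOB)")

db.execute("INSERT INTO main.files VALUES (1, 'pic.png')")
db.execute("INSERT INTO blobs.contents VALUES (1, x'89504E47')")

# Cross-database joins work as if it were a single DB.
name, size = db.execute(
    "SELECT f.name, length(c.data) FROM main.files f "
    "JOIN blobs.contents c ON c.file_id = f.id").fetchone()
assert (name, size) == ("pic.png", 4)
```

One caveat: a transaction spanning both files is atomic within a session, but the two files must be moved around together to stay consistent.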

I like the idea of a general-purpose file I/O wrapper over Sqlite, though I
have no need for it to be built in.


C






Re: [sqlite] Writing in a blob

2013-04-23 Thread Jay A. Kreibich
On Tue, Apr 23, 2013 at 10:28:35AM -0400, Richard Hipp scratched on the wall:

 Or, if you really want file-system semantics on top of your database (and
 this is a reasonably idea, actually) you could write a wrapper library that
 implemented file-system semantics on top of SQLite.

  As a side note, almost every video game console developer out there
  has some type of 'filesystem in a file' library for dumping models,
  textures, animation, and level data into large chunks that can be
  streamed off optical drives.  Optical systems really, really suck
  at seeks, so you tend to batch all the required data into a single
  mass, and then stream the whole thing off the disc when (for example)
  someone loads a new level.  You really don't want to get the on-disc
  filesystem involved in loads if you can avoid it.

  Of course, since the data is read-only and the main goal is to avoid
  seeks, an SQLite solution would be a Very Bad Idea.  Rather, it is
  common to base the embedded filesystem on an archive format, like
  tar.  Using IFF files was also popular back in the cartridge days.

   -j

-- 
Jay A. Kreibich  J A Y  @  K R E I B I.C H 

Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable. -- Angela Johnson


Re: [sqlite] Writing in a blob

2013-04-23 Thread James K. Lowden
On Tue, 23 Apr 2013 10:28:35 -0400
Richard Hipp d...@sqlite.org wrote:

 In summary:  No, a bare SQLite blob does not provide file-system
 semantics.  But you can write a wrapper library around SQLite that
 does provide file-system semantics for large blobs, and doing so
 would have many advantages and be a worth-while project, I think.

A better and simpler answer IMO would be FUSE
(http://fuse.sourceforge.net/) backed by SQLite.  That gives you true
file semantics, not that that's exactly a step forward.  ;-)  

--jkl



[sqlite] Writing in a blob

2013-04-22 Thread Paolo Bolzoni
I have a table with a column of type BLOB, and I use a library that
exports or imports its data structure by writing to or reading from a FILE*.
(see the bottom of this page:
(see the bottom of this page:
http://gmplib.org/manual/I_002fO-of-Integers.html#I_002fO-of-Integers )

So my idea was creating a FILE* wrapper around sqlite3_blob_*
functions using fopencookie (in linux) and funopen (in osx).

But I noticed that sqlite3_blob_write cannot increase the size of the pointed
open blob.  So I ask: is there a way to treat a blob as a stream, so I can write
or read values in it with ease?

thanks,
Paolo


Re: [sqlite] Writing in a blob

2013-04-22 Thread Simon Slavin

On 22 Apr 2013, at 12:39pm, Paolo Bolzoni wrote:

 But I noticed that sqlite3_blob_write cannot increase the size of the pointed
 open blob. So I ask, there is a way to treat a blob as a stream so I can write
 or read values in it with ease?

Unfortunately the size (length) of the BLOB is very significant to the larger 
space-handling aspects of SQLite's file format.  Making a BLOB longer could 
force SQLite to move the data from page to page and do lots of other internal 
reorganisation.  So you can reserve extra space when you write the BLOB, and 
you can read whatever you want, but the documentation is accurate.

Simon.


Re: [sqlite] Writing in a blob

2013-04-22 Thread Paolo Bolzoni
It won't be as easy, but I guess I need to get the size of the gmp integer
first, so as to allocate the blob and write into it later.

On Mon, Apr 22, 2013 at 2:10 PM, Simon Slavin slav...@bigfraud.org wrote:

 On 22 Apr 2013, at 12:39pm, Paolo Bolzoni wrote:

 But I noticed that sqlite3_blob_write cannot increase the size of the pointed
 open blob. So I ask, there is a way to treat a blob as a stream so I can 
 write
 or read values in it with ease?

 Unfortunately the size (length) of the BLOB is very significant to the larger 
 space-handling aspects of SQLite's file format.  Making a BLOB longer could 
 force SQLite to move the data from page to page and do lots of other internal 
 reorganisation.  So you can reserve extra space when you write the BLOB, and 
 you can read whatever you want, but the documentation is accurate.

 Simon.


Re: [sqlite] Writing in a blob

2013-04-22 Thread Roger Binns

On 22/04/13 04:39, Paolo Bolzoni wrote:
 But I noticed that sqlite3_blob_write cannot increase the size of the
 pointed open blob. So I ask, there is a way to treat a blob as a stream
 so I can write or read values in it with ease?

This is discussed in:

  http://www.sqlite.org/c3ref/blob_open.html

It mentions using zeroblob() so you can create the blob of the length you
want and then use blob_open/read/write/close to modify the contents.

Roger

