Re: [sqlite] Multi-process read_uncommitted equivalent?

2010-02-05 Thread Jean-Christophe Deschamps


Be aware that the backup process will need to restart from zero after 
each write!

Now, if you can set up some kind of IPC between your two processes, then 
you could have the update process update the on-disk database and send 
identical data to the reader process, so the latter can update an 
in-memory copy.  The two processes can run as fast as possible without 
much interference, as memory updates are very fast.  50 MB is a 
reasonable amount to hold in memory and would need loading only once at 
startup.

For this to work, you have to take steps to insert identical values in 
both copies, including rowids: no randomblob() or local timestamps.
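
A minimal sketch of the reader side of that scheme, assuming the writer
forwards the exact SQL text it has just committed; recv_sql_from_writer()
is a hypothetical placeholder for whatever IPC mechanism (pipe, socket,
message queue) is chosen:

/* Reader process: keep an in-memory mirror of the on-disk database by
 * replaying the writer's statements.  recv_sql_from_writer() is a
 * made-up IPC helper; it is assumed to block until the writer has
 * committed a statement and then return that statement's SQL text. */
#include <sqlite3.h>
#include <stdio.h>

extern const char *recv_sql_from_writer(void);   /* hypothetical IPC */

int main(void)
{
    sqlite3 *mem;
    sqlite3_open(":memory:", &mem);

    /* Load the ~50 MB snapshot once at startup, e.g. with the online
     * backup API (sqlite3_backup_init / _step / _finish). */

    for (;;) {
        const char *sql = recv_sql_from_writer();
        if (sql == 0) break;                      /* writer went away */

        /* The statements must be deterministic -- explicit rowids, no
         * randomblob(), no local timestamps -- so both copies stay
         * identical. */
        if (sqlite3_exec(mem, sql, 0, 0, 0) != SQLITE_OK)
            fprintf(stderr, "mirror out of sync: %s\n", sqlite3_errmsg(mem));

        /* Report queries run against `mem`; the disk file is never read
         * again, so the writer is never blocked. */
    }
    sqlite3_close(mem);
    return 0;
}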



Re: [sqlite] Multi-process read_uncommitted equivalent?

2010-02-05 Thread Pavel Ivanov
First of all, there's no multi-process equivalent of read_uncommitted.
There's just no way to implement that. If you want that type of
behavior with your current application structure then you should use
some other RDBMS.

For SQLite, though, I'd suggest you use a variation of your 3rd
approach: change the reader SQL to "select * from table where rowid>?
order by rowid limit 1". On the first execution bind 0 to the
parameter, and on each subsequent execution bind the rowid value from
the row you've just processed. This approach wouldn't consume memory to
store all rows, wouldn't require all rowids to be sequential, and
wouldn't hold a read lock while each row is being processed.
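
A rough sketch of that loop with the SQLite C API; the table name "t"
and process_row() are placeholders, and the rowid is added to the select
list so the loop can remember its position.  The statement is reset
before the slow work, so no read lock is held while the writer inserts:

/* Incremental read: fetch one row past the last rowid seen, copy it
 * out, release the lock, then do the heavy processing. */
#include <sqlite3.h>
#include <stdio.h>

static void process_row(sqlite3_int64 rowid)
{
    /* ...minutes of report work on the values copied out earlier... */
    printf("processed row %lld\n", (long long)rowid);
}

static int read_all_rows(sqlite3 *db)
{
    sqlite3_stmt *st = 0;
    sqlite3_int64 last = 0;            /* largest rowid processed so far */
    int rc = sqlite3_prepare_v2(db,
        "SELECT rowid, * FROM t WHERE rowid > ?1 ORDER BY rowid LIMIT 1",
        -1, &st, 0);
    if (rc != SQLITE_OK) return rc;

    for (;;) {
        sqlite3_bind_int64(st, 1, last);
        rc = sqlite3_step(st);
        if (rc != SQLITE_ROW) break;   /* SQLITE_DONE: caught up with writer */

        last = sqlite3_column_int64(st, 0);
        /* Copy any other columns out with sqlite3_column_*() here; they
         * become invalid once the statement is reset. */
        sqlite3_reset(st);             /* releases the read lock */
        process_row(last);             /* writer is free to insert meanwhile */
    }
    sqlite3_finalize(st);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}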


Pavel

On Fri, Feb 5, 2010 at 12:22 PM, Paul Corke wrote:
> I have two independent processes that open the same sqlite database.
> One inserts data into a table and the other reads from the same table
> and creates reports.
>
> The reader process does something like "select * from table" and then
> does some heavy processing between each call to sqlite3_step().  It can
> be minutes between the prepare() and finalize().
>
> If the writer process tries to update the table during that time, then
> it blocks (or times out) which is not what I want.
>
> I've read the docs on locking and it would appear that the behaviour
> given by pragma read_uncommitted would do what I want (and I understand
> the drawbacks), but I can't use shared-cache mode because these are two
> separate processes.
>
> The only solutions I can come up with are:
>
> 1) Run the "select * from table" and read all the rows into memory
>   straight away before processing.
>
> 2) Use the backup interface to copy the DB (maybe into memory) and then
>   run the reader process on the copy.
>
> 3) Change the reader SQL to "select * from table where rowid=?" and then
>   re-bind and re-query for each row (using "select max(rowid)" first).
>
> None of these are particularly attractive to me, so is there a better
> way?
>
> Typical db files (with other tables) are in the region of 50 MB, typical
> datasets on the order of 100,000 rows.  Using Linux if that makes any
> difference; currently with sqlite 3.6.14.2, but I don't have a problem
> upgrading if necessary.
>
> TIA,
>
> Paul.


[sqlite] Multi-process read_uncommitted equivalent?

2010-02-05 Thread Paul Corke
I have two independent processes that open the same sqlite database.
One inserts data into a table and the other reads from the same table
and creates reports.

The reader process does something like "select * from table" and then
does some heavy processing between each call to sqlite3_step().  It can
be minutes between the prepare() and finalize().

If the writer process tries to update the table during that time, then
it blocks (or times out) which is not what I want.

I've read the docs on locking and it would appear that the behaviour
given by pragma read_uncommitted would do what I want (and I understand
the drawbacks), but I can't use shared-cache mode because these are two
separate processes.

The only solutions I can come up with are:

1) Run the "select * from table" and read all the rows into memory
   straight away before processing.

2) Use the backup interface to copy the DB (maybe into memory) and then
   run the reader process on the copy (a rough sketch follows this list).

3) Change the reader SQL to "select * from table where rowid=?" and then
   re-bind and re-query for each row (using "select max(rowid)" first).
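
A rough sketch of what option 2 could look like with the online backup
API; "reports.db" and the helper name are made up, and the caveat from
the first reply above applies: a commit by the writer during an
incremental copy forces the copy to start over.

/* Option 2 sketch: copy the on-disk database into a :memory: connection
 * with the online backup API, then run the report against the copy. */
#include <sqlite3.h>

static int open_report_copy(const char *disk_path, sqlite3 **out)
{
    sqlite3 *disk = 0, *mem = 0;
    int rc = sqlite3_open(disk_path, &disk);
    if (rc == SQLITE_OK) rc = sqlite3_open(":memory:", &mem);

    if (rc == SQLITE_OK) {
        sqlite3_backup *bk = sqlite3_backup_init(mem, "main", disk, "main");
        if (bk == 0) {
            rc = sqlite3_errcode(mem);
        } else {
            /* -1 copies every page in one call.  Copying in smaller
             * steps would let the writer in between steps, but a commit
             * by the writer then makes the copy restart from scratch. */
            sqlite3_backup_step(bk, -1);
            rc = sqlite3_backup_finish(bk);
        }
    }
    sqlite3_close(disk);

    if (rc == SQLITE_OK) {
        *out = mem;       /* run "select * from table" against this copy */
    } else {
        sqlite3_close(mem);
        *out = 0;
    }
    return rc;
}

The reader would call open_report_copy("reports.db", &db) in place of its
current sqlite3_open() and then run its existing query unchanged.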

None of these are particularly attractive to me, so is there a better
way?

Typical db files (with other tables) are in the region of 50 MB, typical
datasets on the order of 100,000 rows.  Using Linux if that makes any
difference; currently with sqlite 3.6.14.2, but I don't have a problem
upgrading if necessary.

TIA,

Paul.