Re: [sqlite] need to write during a long read

2005-09-14 Thread Mark Drago
On Thu, 2005-09-08 at 14:36 -0400, D. Richard Hipp wrote:
> On Thu, 2005-09-08 at 10:48 -0400, Mark Drago wrote:
> > However, it seems that for every rollback that I do there is a file left
> > in the directory with the databases.  I have 30-something files named
> > like the following: 'ame_log.db-mj0E2E1262'.  ame_log.db is the filename
> > of the main log database.  The contents of the file are the full path to
> > the main log database's journal and the full path to the attached queue
> > database's journal.  Should something be getting rid of these files?
> > Has anyone else seen this?
> > 
> 
> Those are the "master journal" files.  They are used to make
> sure that commits to multiple databases occur atomically.
> They should be deleted automatically.  I do not know why they
> are not being removed for you.  I will look into it.

I don't know if this is going to help track down this issue or not,
but it seems to take nearly half a second for the "insert into queue;
begin transaction; move from queue to main log; delete from queue;
commit transaction" process to complete.  Could this have anything to do
with how the databases are being synced to disk?  I'm trying to think of
anything that I'm doing that could be weird or out of the ordinary, and
running 'pragma synchronous=off' on the two attached databases is really
the only thing I can think of.  Any other ideas or things I should look
into?

Mark.




Re: [sqlite] need to write during a long read

2005-09-08 Thread D. Richard Hipp
On Thu, 2005-09-08 at 10:48 -0400, Mark Drago wrote:
> However, it seems that for every rollback that I do there is a file left
> in the directory with the databases.  I have 30-something files named
> like the following: 'ame_log.db-mj0E2E1262'.  ame_log.db is the filename
> of the main log database.  The contents of the file are the full path to
> the main log database's journal and the full path to the attached queue
> database's journal.  Should something be getting rid of these files?
> Has anyone else seen this?
> 

Those are the "master journal" files.  They are used to make
sure that commits to multiple databases occur atomically.
They should be deleted automatically.  I do not know why they
are not being removed for you.  I will look into it.
-- 
D. Richard Hipp <[EMAIL PROTECTED]>



Re: [sqlite] need to write during a long read

2005-09-08 Thread Mark Drago
On Wed, 2005-09-07 at 17:20 -0400, Mark Drago wrote:
> On Tue, 2005-09-06 at 16:07 -0400, D. Richard Hipp wrote:
> > On Tue, 2005-09-06 at 15:49 -0400, Mark Drago wrote:
> > 
> > > 2. I could continue to write to the database in the single thread, but
> > > if the write fails, add the data to a queue and continue.  Then, when
> > > another piece of data has to be logged, and it succeeds, empty the queue
> > > and write all of the data from it into the sqlite DB. 
> > 
> > This is what I would do.  Except I would make the queue a
> > separate SQLite database which was connected to the primary
> > database using ATTACH.
> > 
> > Suppose the "queue" database is named "pending.db".  Then
> > when you open the primary database always immediately do
> > this:
> > 
> > ATTACH DATABASE 'pending.db' AS pending;
> > 
> > Then when you want to make a log entry do the following
> > statements:
> > 
> >INSERT INTO pending.log VALUES(...);
> >INSERT INTO main.log SELECT * FROM pending.log;
> >DELETE FROM pending.log;
> > 
> > When doing the above, abort after the first failure.
> > If the database is locked then the second statement will
> > fail, the DELETE will never occur and information will
> > accumulate in the "pending" database.  If the second
> > statement succeeds, then the information is subsequently
> > deleted from the pending database.
> > 
> > If you really want to make sure that the transfer from
> > pending to main is atomic, enclose the last two statements
> > in a transaction.

> The problem I'm seeing is that (I'm half guessing) after I run a
> rollback, when I try and run 'begin transaction' I get: 'cannot start a
> transaction within a transaction'.  As far as I can tell there is no way
> to get out of this part of my code without running either a rollback or
> a commit.  I've executed 'pragma synchronous=off' on this database as
> I'll be writing to it very often and the data isn't life-or-death
> important.  I don't think this should impact anything, but I figured I
> would share.  Any ideas?

Alright, I've solved this problem.  It makes perfect sense in hindsight.
Basically, the only part of this process that is likely to fail because the
database is busy is the 'commit transaction' statement.  However, I was
never checking to see if that failed.  Now, if the commit fails I run a
'rollback' (which shouldn't fail just because the DB is busy) and the queue
works as expected.
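
A rough sketch of the flush logic in terms of the SQLite C API, with the
commit check included (the helper function and the exact INSERT into the
pending table are illustrative, not taken from the actual program):

/* Rough sketch only: the queue flush described above, with the COMMIT
 * return code checked and a ROLLBACK issued if anything fails.  Table
 * and database names follow the examples earlier in this thread. */
#include <sqlite3.h>
#include <stdio.h>

/* Run one SQL statement and return the sqlite3_exec() result code. */
static int run(sqlite3 *db, const char *sql)
{
    char *err = 0;
    int rc = sqlite3_exec(db, sql, 0, 0, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "%s: %s\n", sql, err ? err : "unknown error");
        sqlite3_free(err);
    }
    return rc;
}

/* Insert one entry into the pending queue, then try to move the whole
 * queue into the main log.  On any failure (typically SQLITE_BUSY) roll
 * back and leave the data in pending.log for the next attempt. */
static void log_entry(sqlite3 *db, const char *insert_pending_sql)
{
    if (run(db, insert_pending_sql) != SQLITE_OK) return;
    if (run(db, "BEGIN TRANSACTION;") != SQLITE_OK) return;

    if (run(db, "INSERT INTO main.log SELECT * FROM pending.log;") != SQLITE_OK ||
        run(db, "DELETE FROM pending.log;") != SQLITE_OK ||
        run(db, "COMMIT TRANSACTION;") != SQLITE_OK) {
        /* COMMIT itself can return SQLITE_BUSY, so its result is checked
         * too; roll back and keep the rows queued in pending.log. */
        run(db, "ROLLBACK TRANSACTION;");
    }
}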

However, it seems that for every rollback that I do there is a file left
in the directory with the databases.  I have 30-something files named
like the following: 'ame_log.db-mj0E2E1262'.  ame_log.db is the filename
of the main log database.  The contents of the file are the full path to
the main log database's journal and the full path to the attached queue
database's journal.  Should something be getting rid of these files?
Has anyone else seen this?

Mark.




Re: [sqlite] need to write during a long read

2005-09-08 Thread Jay Sprenkle
On 9/7/05, Mark Drago <[EMAIL PROTECTED]> wrote:
> 
> 
> The problem I'm seeing is that (I'm half guessing) after I run a
> rollback, when I try and run 'begin transaction' I get: 'cannot start a
> transaction within a transaction'. As far as I can tell there is no way
> to get out of this part of my code without running either a rollback or
> a commit. I've executed 'pragma synchronous=off' on this database as
> I'll be writing to it very often and the data isn't life-or-death
> important. I don't think this should impact anything, but I figured I
> would share. Any ideas?


This is just a WAG, but:

You said you checked that the transaction was always closed.  You are doing a
commit if it succeeds, right?  Does the transaction perhaps not lock/unlock
both databases?

If I were doing it I would store pending log messages in a queue in memory,
or design my application to wait and retry a small number of times.  On my
last project I went by the rule "if you can't commit changes within 30
seconds, something is seriously broken".  If that rule was broken I would
write to syslog and abort.
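
A minimal sketch of that wait-and-complain policy using the SQLite C API
(the 30-second figure and the syslog call are illustrative):

/* Sketch: let SQLite retry internally for up to 30 seconds on SQLITE_BUSY,
 * and treat a commit that still fails as a fatal condition. */
#include <sqlite3.h>
#include <syslog.h>
#include <stdlib.h>

static void commit_or_abort(sqlite3 *db)
{
    sqlite3_busy_timeout(db, 30000);   /* retry on busy for up to 30 s */

    if (sqlite3_exec(db, "COMMIT;", 0, 0, 0) != SQLITE_OK) {
        /* Could not commit within 30 seconds: something is seriously broken. */
        syslog(LOG_ERR, "sqlite commit failed: %s", sqlite3_errmsg(db));
        abort();
    }
}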


Re: [sqlite] need to write during a long read

2005-09-07 Thread Mark Drago
On Tue, 2005-09-06 at 16:07 -0400, D. Richard Hipp wrote:
> On Tue, 2005-09-06 at 15:49 -0400, Mark Drago wrote:
> 
> > 2. I could continue to write to the database in the single thread, but
> > if the write fails, add the data to a queue and continue.  Then, when
> > another piece of data has to be logged, and it succeeds, empty the queue
> > and write all of the data from it into the sqlite DB. 
> 
> This is what I would do.  Except I would make the queue a
> separate SQLite database which was connected to the primary
> database using ATTACH.
> 
> Suppose the "queue" database is named "pending.db".  Then
> when you open the primary database always immediately do
> this:
> 
> ATTACH DATABASE 'pending.db' AS pending;
> 
> Then when you want to make a log entry do the following
> statements:
> 
>INSERT INTO pending.log VALUES(...);
>INSERT INTO main.log SELECT * FROM pending.log;
>DELETE FROM pending.log;
> 
> When doing the above, abort after the first failure.
> If the database is locked then the second statement will
> fail, the DELETE will never occur and information will
> accumulate in the "pending" database.  If the second
> statement succeeds, then the information is subsequently
> deleted from the pending database.
> 
> If you really want to make sure that the transfer from
> pending to main is atomic, enclose the last two statements
> in a transaction.

Thanks a bunch for this idea.  I've implemented this and I'm getting an
odd error.  The scenario is thus:

I insert the log into the attached database:
INSERT INTO pending.log VALUES(...);

I then begin a transaction:
BEGIN TRANSACTION;

If this does not return SQLITE_OK, I just leave the log in the queue
and hopefully the next log attempt will be able to work it out.

Once in the transaction, I run the INSERT INTO log SELECT * FROM ...:
INSERT INTO main.log SELECT * FROM pending.log;

If this does not return SQLITE_OK, I run 'ROLLBACK TRANSACTION;' and
leave that log data in the pending db.

Provided the move from the queue to the main log succeeds, I delete the
entries from the pending log:
DELETE FROM pending.log;

If this does not return SQLITE_OK, I run 'ROLLBACK TRANSACTION;' and
leave that log data in the pending db.

Provided the delete succeeded, I run 'COMMIT TRANSACTION;' and all
should be well.


The problem I'm seeing is that (I'm half guessing) after I run a
rollback, when I try and run 'begin transaction' I get: 'cannot start a
transaction within a transaction'.  As far as I can tell there is no way
to get out of this part of my code without running either a rollback or
a commit.  I've executed 'pragma synchronous=off' on this database as
I'll be writing to it very often and the data isn't life-or-death
important.  I don't think this should impact anything, but I figured I
would share.  Any ideas?
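
For anyone diagnosing the same error: sqlite3_get_autocommit() can be used
to check whether a transaction is still open before issuing another BEGIN.
A minimal sketch (not part of the code described above):

/* Sketch: sqlite3_get_autocommit() returns 0 while a transaction is open,
 * which is exactly the state that makes BEGIN fail with "cannot start a
 * transaction within a transaction". */
#include <sqlite3.h>

static int safe_begin(sqlite3 *db)
{
    if (!sqlite3_get_autocommit(db)) {
        /* A previous transaction was never committed or rolled back
         * (or a failed COMMIT went unnoticed); clean it up first. */
        sqlite3_exec(db, "ROLLBACK TRANSACTION;", 0, 0, 0);
    }
    return sqlite3_exec(db, "BEGIN TRANSACTION;", 0, 0, 0);
}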

Mark.




RE: [sqlite] need to write during a long read

2005-09-07 Thread Mark Drago
On Wed, 2005-09-07 at 20:25 +0100, Brandon, Nicholas wrote:
> 
> >take longer than 5 seconds.  These reads are being done by PHP using the
> >PDO driver for sqlite3.  My understanding of the problem is that the PHP
> >reader is holding a SHARED lock for longer than 5 seconds, so while the
> >C program can acquire a PENDING lock, it can not get the EXCLUSIVE lock
> >inside of 5 seconds and thus times out.
> 
> I've also used the PDO driver in the past and it took me a while to realise
> that the SHARED lock was not being released by PDO/SQLite until after it
> tried to retrieve the next row and failed. So in the following example loop:
> 
> (Run SQL select statement)
> while (fetchArray())
> {
> 
>/* Long time spent processing here */
> 
> }
> 
> The SHARED lock is held until the end of the while loop instead of releasing
> before the loop (which I was expecting). Obviously if you have a significant
> amount of processing in the loop, it's not difficult to get writer
> starvation.
> 
> A question to those who know - when (in terms of which C API call) does
> SQLite release the SHARED lock after a read?
> 
> And a general survey to everyone... in your applications, what is the
> 'standard' practice to handle a SELECT statement that may return more than a
> few rows? Can temporary tables be used without still holding the
> database-level lock?
> 
> Sorry Mark, I've noticed I've diverged from your question a tad.

Nick,

Thanks for bringing this up.  We are doing a bunch of processing on each
row of data that comes out.  We're likely going to experiment with
sucking all of the rows into an array and working on it afterwards to
see if that benefits things at all.
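
A rough sketch of that fetch-everything-first idea in terms of the SQLite C
API (under PDO the equivalent would be fetchAll() before the processing loop;
the table and column names here are made up):

/* Sketch: pull all rows into memory as fast as possible, finalize the
 * statement to end the read, then do the slow per-row processing. */
#include <sqlite3.h>
#include <stdlib.h>
#include <string.h>

static void read_then_process(sqlite3 *db)
{
    sqlite3_stmt *stmt = 0;
    char **rows = 0;
    int n = 0;

    if (sqlite3_prepare_v2(db, "SELECT url FROM log;", -1, &stmt, 0) != SQLITE_OK)
        return;

    while (sqlite3_step(stmt) == SQLITE_ROW) {
        char **tmp = realloc(rows, (n + 1) * sizeof *rows);
        if (!tmp) break;                      /* out of memory: stop early */
        rows = tmp;
        const unsigned char *v = sqlite3_column_text(stmt, 0);
        rows[n++] = strdup(v ? (const char *)v : "");
    }
    /* In autocommit mode, with no other statements active, the SHARED lock
     * is dropped once the statement is finalized (or reset). */
    sqlite3_finalize(stmt);

    for (int i = 0; i < n; i++) {
        /* ... long per-row processing happens here, without the lock ... */
        free(rows[i]);
    }
    free(rows);
}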

Mark.

> Thanks
> Nick





RE: [sqlite] need to write during a long read

2005-09-07 Thread Brandon, Nicholas


>take longer than 5 seconds.  These reads are being done by PHP using the
>PDO driver for sqlite3.  My understanding of the problem is that the PHP
>reader is holding a SHARED lock for longer than 5 seconds, so while the
>C program can acquire a PENDING lock, it can not get the EXCLUSIVE lock
>inside of 5 seconds and thus times out.

I've also used the PDO driver in the past and it took me a while to realise
that the SHARED lock was not being released by PDO/SQLite until after it
tried to retrieve the next row and failed. So in the following example loop:

(Run SQL select statement)
while (fetchArray())
{

   /* Long time spent processing here */

}

The SHARED lock is held until the end of the while loop instead of releasing
before the loop (which I was expecting). Obviously if you have a significant
amount of processing in the loop, it's not difficult to get writer
starvation.

A question to those who know - when (in terms of which C API call) does
SQLite release the SHARED lock after a read?

And a general survey to everyone... in your applications, what is the
'standard' practice to handle a SELECT statement that may return more than a
few rows? Can temporary tables be used without still holding the
database-level lock?

Sorry Mark, I've noticed I've diverged from your question a tad.

Thanks
Nick






Re: [sqlite] need to write during a long read

2005-09-06 Thread D. Richard Hipp
On Tue, 2005-09-06 at 15:49 -0400, Mark Drago wrote:

> 2. I could continue to write to the database in the single thread, but
> if the write fails, add the data to a queue and continue.  Then, when
> another piece of data has to be logged, and it succeeds, empty the queue
> and write all of the data from it into the sqlite DB. 

This is what I would do.  Except I would make the queue a
separate SQLite database which was connected to the primary
database using ATTACH.

Suppose the "queue" database is named "pending.db".  Then
when you open the primary database always immediately do
this:

ATTACH DATABASE 'pending.db' AS pending;

Then when you want to make a log entry do the following
statements:

   INSERT INTO pending.log VALUES(...);
   INSERT INTO main.log SELECT * FROM pending.log;
   DELETE FROM pending.log;

When doing the above, abort after the first failure.
If the database is locked then the second statement will
fail, the DELETE will never occur and information will
accumulate in the "pending" database.  If the second
statement succeeds, then the information is subsequently
deleted from the pending database.

If you really want to make sure that the transfer from
pending to main is atomic, enclose the last two statements
in a transaction.
-- 
D. Richard Hipp <[EMAIL PROTECTED]>
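
A hedged setup sketch for this scheme, using the SQLite C API: open the main
log database, attach the queue database, and make sure both log tables exist.
The filename matches the one mentioned earlier in the thread, but the table
columns are purely illustrative:

#include <sqlite3.h>
#include <stdio.h>

static sqlite3 *open_log_db(void)
{
    sqlite3 *db = 0;
    if (sqlite3_open("ame_log.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 0;
    }
    const char *setup =
        "ATTACH DATABASE 'pending.db' AS pending;"
        "CREATE TABLE IF NOT EXISTS main.log(ts INTEGER, msg TEXT);"
        "CREATE TABLE IF NOT EXISTS pending.log(ts INTEGER, msg TEXT);";
    if (sqlite3_exec(db, setup, 0, 0, 0) != SQLITE_OK) {
        fprintf(stderr, "setup: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 0;
    }
    return db;   /* ready for the INSERT / INSERT...SELECT / DELETE sequence */
}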