Black, Michael (IS) wrote:

> Combining your transactions would probably help a fair bit.
> And I think you're sleeping WAY too long.

thanks.

> Besides upgrading your sqlite use sqlite3_busy_timeout() -- it will
> handle the sleeps for you and do it much faster than what you're doing
> (besides doing it automatically for you whenever it needs to).

Thanks. I didn't know about that. Hmm, I just had a look and it seems
that perl's DBD::Sqlite module (which I am using) sets the busy timeout
to 30s by default. That should be enough by itself.
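For reference, here's a minimal sketch of setting the timeout
explicitly, using Python's sqlite3 module as a stand-in for
DBD::SQLite (in Perl it would be the sqlite_busy_timeout method
or the busy_timeout PRAGMA; the mechanism underneath is the same
SQLite busy handler either way):

```python
import sqlite3

# The timeout argument is SQLite's busy timeout, in seconds here.
conn = sqlite3.connect(":memory:", timeout=25.0)

# The same setting via PRAGMA, in milliseconds, which is the form
# that works from any SQLite wrapper:
conn.execute("PRAGMA busy_timeout = 25000")
```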

> I take it that you're not seeing a lot of "locked" errors for a single
> user transaction???

I am, because I re-attempt the transaction when I see a
locked message, and the transactions tend to either succeed
after 1-5 attempts or fail after 50 attempts. However, other
users' transactions are happening at the same time. It's not
a "database is locked(6)" error, if that's what you mean.
Each process only has a single connection.

> With a bunch of users you're quite likely get one
> collision but I'd be surprised if it was more than one at the size
> you're talking about.

Well, there were 22 failures altogether yesterday and 8 the
day before that, but it doesn't happen every day.

> You didn't say if you were exhausting your 50
> times or how many sleeps you ended up doing for any one person.

Success after 1-5 attempts or failure after 50, and that's
with perl's default 30s busy timeout (possibly; see below
for timing weirdness).

> If you use sqlite3_busy_timeout() and give it 25000 milliseconds (25
> seconds) I'll bet you don't see any "locked" messages any more plus
> you'll get faster user response times.

You would think so, wouldn't you :)

What I did notice was that I was only looping around the
execute phase but was also getting locked errors from the
prepare phase, so I started looping around that as well,
which seemed to help. Time will tell.

Does the busy timeout apply to prepare as well as execute?
I assume it must.
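For what it's worth, the retry loop I'm describing looks roughly
like this (sketched with Python's sqlite3 for brevity; in DBI
terms it covers both prepare and execute, since Python's execute()
does both in one call):

```python
import random
import sqlite3
import time

def run_with_retry(conn, sql, params=(), attempts=50):
    """Retry a statement when SQLite reports the database is locked.

    Covers both the prepare and execute phases, since execute()
    here performs both.
    """
    for attempt in range(1, attempts + 1):
        try:
            return conn.execute(sql, params)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) or attempt == attempts:
                raise
            # random back-off between 0.25s and 0.5s before retrying
            time.sleep(random.uniform(0.25, 0.5))
```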

> You might also find it useful to put a tic/toc function in your code
> and show the time to execute all the transactions in your audit log.

The strange thing is that I do log instances where I had to
loop after a locked error message and I log the total
duration but it doesn't entirely make sense.

The successes (after 1-5 locked messages) take about 30s
per loop, which makes sense now that I know about perl's
sqlite module's default busy timeout (it's a hideously long
period of time, though), but the failures after 50 loops
finish after about 19s in total. That doesn't make sense.
Any theories about that?
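The tic/toc logging I have is essentially this wrapper (a Python
sketch; the label and the print stand in for my real audit-log
call):

```python
import time

def timed(label, fn, *args, **kwargs):
    """Tic/toc helper: run fn, log its duration, and return
    (result, elapsed_seconds) so the timing can go in the audit log."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    elapsed = time.monotonic() - start
    print(f"{label} took {elapsed:.3f}s")
    return result, elapsed
```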

> Michael D. Black
> Senior Scientist
> Northrop Grumman Mission Systems

I can upgrade to 3.6.22 (the version included in the latest
stable version of perl's DBD::Sqlite) and I can turn each page
into a single transaction. Hopefully, that will fix things.
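Roughly what I have in mind for the per-page transaction (a Python
sketch with made-up table names; the connection has to be in
autocommit mode, isolation_level=None, so BEGIN IMMEDIATE can be
issued by hand):

```python
import sqlite3

def record_page_hit(conn, user, page):
    """Wrap all of a page's writes in one transaction, so the write
    lock is taken (and contended for) once rather than per statement.

    Table names are hypothetical. BEGIN IMMEDIATE grabs the write
    lock up front instead of at the first write.
    """
    conn.execute("BEGIN IMMEDIATE")
    try:
        conn.execute("INSERT INTO hits(user, page) VALUES (?, ?)",
                     (user, page))
        conn.execute("UPDATE users SET hits = hits + 1 WHERE name = ?",
                     (user,))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
```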

Are 3.4.2 and 3.6.22 databases binary compatible? I noticed
some "potentially incompatible change" warnings in the
release history since 3.4.2 but the 35to36.html page states
that the database format is unchanged and the 34to35.html
page doesn't mention the database format at all (just the
C API). 3.6.22 seems happy to do an integrity check on a
3.4.2 database and query it but I'd like to know for sure
whether or not I need to rebuild the database. It seems
that they must be compatible or the 34to35.html page
would have stated that they weren't.

Thanks,
raf

> ________________________________
> 
> From: sqlite-users-boun...@sqlite.org on behalf of raf
> Sent: Sun 7/11/2010 11:37 PM
> To: sqlite-users@sqlite.org
> Subject: EXTERNAL:[sqlite] database is locked(5) too often - what am i doing 
> wrong?
> 
> Greetings,
> 
> I've been using sqlite-3.4.2 for a customer-only website for a few years
> now and the number of users has recently grown to the point where I'm now
> seeing too many "database is locked(5)" (i.e. busy) errors.
> 
> When my code gets a "database is locked(5)" error, it sleeps for a random
> period between 0.25-0.5 seconds and tries again. It repeats this up to 50
> times before giving up and I only see an error if all of the repeated
> attempts failed as well. I'm now seeing this error a lot.
> 
> My use of sqlite seems unable to cope with more than about 30-50 concurrent
> updaters. I'm not sure of the exact number. Is that typical? Bear in mind
> that every use of this website is recorded in the database for auditing
> purposes so all users are updaters.
> 
> Each page hit results in several database writes each of which is currently
> in a separate transaction. Is combining these into a single transaction likely
> to help or make matters worse? Or is this just an inappropriate use of sqlite?
> 
> I expect I'll have to migrate it to postgres but I thought I'd ask for 
> opinions
> here first.
> 
> Cheers,
> raf
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
