"Rosemary Alles" <al...@ipac.caltech.edu> schrieb im
Newsbeitrag news:57c55bd7-8d56-4913-adce-cbb2978dd...@ipac.caltech.edu...

> > What do you mean by "cores"?
> > Are these multiple client *machines* - or are we talking about
> > a single client-machine with multiple (cpu-)cores here?
>
> Multiple machines with multiple cpus.
Ah Ok, that clears things up a bit.

> > In case you mean only "several cores" on a single client-machine -
> > why is your DB "behind" NFS at all (and not on a local disk)?
>
> N/A - see above - several machines.
So what you really want in that case is a DB-server
implementation - and that's something SQLite does not
support "out of the box".

Working in "faked server-mode" against a share is in no
way satisfying under heavy "write-load", because neither
the smb-protocol nor NFS are well suited for DB-typical
transfers.

Nonetheless, one can wrap the SQLite engine behind a
dedicated AppServer instance (which always works locally
against its SQLite DB files, directly "visible" only to that
server instance or host).

And if the communication with this server instance is then
done over sockets again, but with better-suited protocols
(transferring "disconnected" resultsets - and splitting your
larger write jobs against the server instance into smaller
pieces), then SQLite performs very well -
as I already posted in an earlier reply to you.
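
Very roughly, the idea looks like this (an untested sketch only -
take the one-statement-per-connection "protocol", the port and the
file name as made-up examples, not as a finished server):

/* Sketch of an "AppServer" that owns the SQLite file locally and
   answers simple text queries over a TCP socket.
   Untested illustration - no real protocol, no error recovery. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sqlite3.h>

static int send_row(void *arg, int ncol, char **vals, char **names)
{
    int fd = *(int *)arg;
    for (int i = 0; i < ncol; i++) {
        const char *v = vals[i] ? vals[i] : "NULL";
        write(fd, v, strlen(v));
        write(fd, i < ncol - 1 ? "|" : "\n", 1);
    }
    return 0;                                /* keep iterating over rows */
}

int main(void)
{
    sqlite3 *db;
    sqlite3_open("local.db", &db);           /* DB file on a *local* disk */

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);

    for (;;) {
        int cli = accept(srv, NULL, NULL);
        char sql[4096] = {0};
        read(cli, sql, sizeof sql - 1);      /* one SQL statement per request */
        char *err = NULL;
        if (sqlite3_exec(db, sql, send_row, &cli, &err) != SQLITE_OK) {
            write(cli, err, strlen(err));
            sqlite3_free(err);
        }
        close(cli);                          /* "disconnected" resultset sent */
    }
}

The important part is only that sqlite3_open() points at a file on a
*local* disk of that host - the clients never touch the DB file
themselves.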

> We achieve neither, they are both (read and write - not done
> simultaneously, i.e. never read when writing and vice versa) ...
That's normal for SQLite. While a writer currently holds the lock,
the readers (reader threads or processes) are blocked as well.
But when you work locally against your DB file, those writes
(done in smaller chunks) usually block the DB (and any waiting
readers) for only a few milliseconds.
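
The "smaller chunks" idea as a rough sketch (the chunk size and the
caller-supplied insert routine are made up - only the BEGIN/COMMIT
pattern is the point):

/* Sketch: commit every CHUNK rows, so the writer lock is released
   regularly and readers can proceed between the chunks. */
#include <sqlite3.h>

#define CHUNK 1000

void bulk_insert(sqlite3 *db, long total_rows,
                 void (*insert_one)(sqlite3 *, long))
{
    for (long done = 0; done < total_rows; done += CHUNK) {
        sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);
        for (long i = done; i < done + CHUNK && i < total_rows; i++)
            insert_one(db, i);                 /* caller-supplied insert */
        sqlite3_exec(db, "COMMIT", 0, 0, 0);   /* lock dropped, readers run */
    }
}

Between the commits the writer lock is dropped, so your readers only
ever wait for one chunk, not for the whole job.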

And in your case (using NFS or SMB) acquiring these locks -
and also releasing them - is very costly.
So I'm not sure the main culprit here is your current
busy-handling (although you could probably speed the whole
thing up a bit with a better-working "wait strategy" for
SQLITE_BUSY errors) - but first you need to get rid of these
protocols and use SQLite locally, from inside a (multithreaded)
server instance on a dedicated host.

> > That does not mean, that your Inserts will be slow (if only
> > one process/thread or core will handle them) - after all you
> > perform your DB-writes against a single resource (your disk).
> > And whilst working against a local disk, SQLite can achieve
> > ca. 50000-200000 inserts per second, depending of course
> > on the Column-Count and if the underlying table has indexes
> > defined on its columns. In my tests I can achieve ca.
> > 120000 inserts per second on a table with 8 "mixed-type"
> > Columns (having no indexes defined on it - normal 7200rpm
> > SATA-HardDisk).
>
> Obviously, we are doing something weird, or our NFS
> configuration is strange or whatever.
I'd say what you currently see is "normal" (more or less).

> The disk is obviously not local
Yep - I was aware of that - what I wanted to show with
the timings above was what delays you should expect
when you run smaller write jobs against your DB from
a locally working server instance (against a local disk).
Insert 1000 new records? That will block the readers for
ca. 10-20 ms - then reading can continue in parallel in the
reader threads.
And the usual ratio of read to write requests is more like
90/10 in a typical "business DB" scenario (if not even more
read-heavy) - and those "1000 inserts at once" don't come
up that often either.

> The insert speeds you specify are more than likely
> heavily dependent on configuration/ setup.
> Are you also working on multiple machines with multiple
> cpus over NFS?
No - this is (more or less) the "out-of-the-box" performance
of the SQLite engine, running on a desktop OS, working
against a normal SATA drive - wrapping the inserts in
a transaction and using the bind APIs to fill in the
parameter slots of each new insert record.

As said - that's what you can expect if you find a
good AppServer implementation which wraps the
engine and works locally against the hard disks.
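
In C-API terms, the pattern behind those numbers is roughly this
(a sketch - table, column types and row count are made-up examples):

/* Sketch: one transaction around many inserts, one prepared
   statement reused via the bind APIs. */
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *st;
    sqlite3_open("local.db", &db);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(a INTEGER, b TEXT)", 0, 0, 0);

    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    sqlite3_prepare_v2(db, "INSERT INTO t(a,b) VALUES(?,?)", -1, &st, 0);
    for (int i = 0; i < 100000; i++) {
        sqlite3_bind_int(st, 1, i);                        /* fill the param slots */
        sqlite3_bind_text(st, 2, "some text", -1, SQLITE_STATIC);
        sqlite3_step(st);
        sqlite3_reset(st);                                 /* reuse the statement */
    }
    sqlite3_finalize(st);
    sqlite3_exec(db, "COMMIT", 0, 0, 0);                   /* one commit for all rows */
    sqlite3_close(db);
    return 0;
}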

> In this case, on a local disk, much less than a second
> per update.
See - there you are... ;-)

> However, scaled over the previously described
> "concurrent" scenario with several identical copies
> of the process (program) attempting to access the
> database over NFS from several computers with
> several cpus  - up to a minute per update - It's ridiculous.
Yep - as said - either find a good SQLite (App-)Server
wrapper for unix-like systems (maybe Real-SQLServer
or John Stanton's implementation - or the network
engines mentioned in the SQLite wiki) -
or just switch to MySQL or PostgreSQL.

But lower your expectations when working over
SMB or NFS with SQLite.

> Don't have those numbers handy, but will soon. The total
> size of current DB is up to 70mb.
Oh - in that case (and given that you reported above that
your updates finish in well under a second locally) -
I don't think I need more numbers.
That's a small DB, which should even fit entirely into the
file cache (and your update volume doesn't seem that
high either).
Now I'm starting to wonder why you get such bad timings
(even when working over NFS) - until now I thought you were
moving *much* more data around.

> My primary concern now is to prevent a dead-lock.
That seems to make sense now (I assume you're operating
"near a deadlock" with your multiple client requests, not
sleeping properly before the next retry).
Sorry, I can't offer polished C code snippets, I'm not a
C coder - but we start our write attempts with "BEGIN
IMMEDIATE" - on busy, first a few rounds with short
sleep intervals - then looping with a longer one, but
after a while with progressively shorter sleep intervals
again (to prevent starvation) - more or less the same
wait strategy as in the readers.
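
Very roughly, and with the caveat above that I'm no C coder, that
retry loop might look something like this (an untested sketch - the
interval values and thresholds are made-up examples):

/* Sketch of the described wait strategy: try BEGIN IMMEDIATE, on
   SQLITE_BUSY sleep and retry - first short intervals, then longer
   ones, then progressively shorter again to avoid starvation. */
#include <sqlite3.h>

int begin_write(sqlite3 *db)
{
    int attempt = 0;
    for (;;) {
        int rc = sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);
        if (rc != SQLITE_BUSY)
            return rc;                   /* SQLITE_OK or a real error */

        int ms;
        if (attempt < 5)        ms = 10;                    /* a few quick retries */
        else if (attempt < 20)  ms = 250;                   /* then back off */
        else                    ms = 250 / (attempt - 18);  /* shorten again */
        if (ms < 10) ms = 10;
        sqlite3_sleep(ms);
        attempt++;
    }
}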

Maybe just post your actual code - how you initiate your writes
and how you've implemented your current wait loops
(regarding the timeout handling) - I'm sure the experts here
will spot what's wrong.

Olaf


