On Sunday, 20 January, 2019 16:32, Thomas Kurz <sqlite.2...@t-net.ruhr> wrote:

>Just for curiosity: how do other DBMS (MySQL, etc.) solve this issue?
>I guess the keypoint is that no matter where the query comes from,
>the database files are always under control of the same process which
>then can take care of the correct order in which to read and write
>data.

>But the problem that the same database page cannot be written to from
>2 different statements (or being read while another statement is
>writing into it) should arise there as well, shouldn't it? However, I
>have never seen MySQL return some kind of "busy" (whatsoever it may
>be named there).

>As I said, just for curiosity - no offense against SQlite because I
>can well understand the problem that SQlite has to deal with :-)

The "other DBMS" to which you refer are all of a class called a Database 
Server.  As such they are accessed remotely and thus have huge latency between 
when "your application" requests for something to be done and when that request 
gets to the server so that it can be processed.  This means that these database 
engines have to be optimized for concurrency and this is done by using what is 
called "row level locking" in which locks apply to a specific row in a specific 
table.  You can indeed "write" the same page at the same time from a single 
process, you just need to make sure that the row of data that "A" is writing to 
is not the same row of data that "B" is writing to.
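
For a concrete (if contrived) illustration, here is a rough sketch, assuming a 
MySQL server on localhost, the mysql-connector-python package, and a made-up 
customers(id, addr) table; two sessions each lock and update a different row of 
the same table at the same time without ever blocking each other:

    # Sketch only: assumes a MySQL server on localhost, the mysql-connector-python
    # package, and a hypothetical InnoDB table customers(id INT PRIMARY KEY, addr TEXT).
    import mysql.connector

    def connect():
        return mysql.connector.connect(
            host="localhost", user="demo", password="demo", database="demo")

    a, b = connect(), connect()          # two client sessions
    ca = a.cursor(buffered=True)
    cb = b.cursor(buffered=True)

    # Each session opens a transaction and locks a *different* row.
    ca.execute("START TRANSACTION")
    ca.execute("SELECT addr FROM customers WHERE id = 1 FOR UPDATE")

    cb.execute("START TRANSACTION")
    cb.execute("SELECT addr FROM customers WHERE id = 2 FOR UPDATE")  # does not block

    ca.execute("UPDATE customers SET addr = '1 First St'  WHERE id = 1")
    cb.execute("UPDATE customers SET addr = '2 Second St' WHERE id = 2")

    a.commit()
    b.commit()

    # Had both sessions asked for id = 1, the second SELECT ... FOR UPDATE would
    # have blocked (and eventually timed out) waiting for the first to commit.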

Furthermore, many client/server systems use the "opportunistic locking" model: 
plow ahead at full speed, but keep enough history to go "Oh Shit" when 
required.  This works because it is probabilistically unlikely that your 1000 
clients will all try to modify the same thing at once, so you only need to be 
able to detect a collision and go "Oh Shit" when necessary.  On balance, this 
"plow ahead and pray" approach works adequately for most things.  That is, it is 
unlikely that your clients will all try to update the same row in the same table 
at the same time (a given customer can only be calling ONE of your 5000 customer 
service representatives to update their shipping address, for example, so having 
TWO agents submitting an update for the same row at the same time is extremely 
improbable).  And if they do, you can raise the "Oh Shit" flag and cancel the 
transaction(s).
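
As a sketch of that detect-and-cancel pattern (the table and column names are 
invented, and the sqlite3 module is used here only because it runs out of the 
box), the usual trick is a version column: read the row and remember its 
version, make the UPDATE conditional on the version being unchanged, and raise 
the "Oh Shit" flag when it is not:

    # Sketch of the "plow ahead and pray" pattern using a version column.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, addr TEXT, version INTEGER)")
    db.execute("INSERT INTO customers VALUES (1, 'old address', 1)")
    db.commit()

    def update_address(db, cust_id, new_addr):
        # Read the row (and remember its version) without taking any lock.
        addr, version = db.execute(
            "SELECT addr, version FROM customers WHERE id = ?", (cust_id,)).fetchone()
        # ... agent edits the address at leisure ...
        # Write back only if nobody else changed the row in the meantime.
        cur = db.execute(
            "UPDATE customers SET addr = ?, version = version + 1 "
            "WHERE id = ? AND version = ?", (new_addr, cust_id, version))
        db.commit()
        if cur.rowcount == 0:
            raise RuntimeError("Oh Shit: somebody else updated this row first; retry")

    update_address(db, 1, "1 First St")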

So a client/server architecture has massive latencies, on the order of hundreds 
of milliseconds, but uses very fine-grained locking to allow thousands of 
turnarounds to be in flight simultaneously and the updates to "appear to a lay 
external observer" to be occurring concurrently.

SQLite3, however, has latencies on the order of microseconds and does not need 
to pay the overhead of managing such fine-grained locking in order to obtain the 
same or better local throughput.  And being a local database at the whim of a 
single client application, it does not have the same opportunity for 
opportunistic behaviour as a client/server database which may be serving 
thousands of concurrent (but different) applications.
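
To make the contrast concrete, here is a small sketch (using Python's built-in 
sqlite3 module and the default rollback-journal mode) of SQLite's coarser, 
whole-database write lock: while one connection holds a write transaction, a 
second writer has no individual row to wait on and simply gets SQLITE_BUSY 
("database is locked"):

    import sqlite3

    # isolation_level=None puts the module in autocommit mode, so the explicit
    # BEGIN/COMMIT below are what actually control the transactions.
    a = sqlite3.connect("/tmp/demo.db", isolation_level=None)
    b = sqlite3.connect("/tmp/demo.db", isolation_level=None, timeout=0)  # fail at once

    a.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")

    a.execute("BEGIN IMMEDIATE")               # first writer takes the write lock
    a.execute("INSERT INTO t VALUES (1)")

    try:
        b.execute("BEGIN IMMEDIATE")           # second writer: no single row to
        b.execute("INSERT INTO t VALUES (2)")  # wait on, the whole database is busy
    except sqlite3.OperationalError as e:
        print("second writer got:", e)         # "database is locked"

    a.execute("COMMIT")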

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.
