Christian Smith wrote:
Bogusław Brandys uttered:
[EMAIL PROTECTED] wrote:
Mikey C <[EMAIL PROTECTED]> wrote:
Please implement table and row level locking. :-)
People commonly believe that doing so must be easy. I
certainly get a lot of requests for it from people who
think they know how. But in fact, row-level locking
is extraordinarily difficult. To my knowledge, nobody has yet come
up with a way to do it unless you:
* Add a server process to coordinate access.
* Accept that the database might be corrupted if
an application crashes while writing.
* Make writing to the database very, very slow.
* Double the size of the database file.
If you think you know a way to implement row-level
locking that does not impose one of the above
limitations, then please tell me and I will look
into the matter.
I'm not an expert, but let me propose a solution:
A multi-generational architecture like Interbase/Firebird's can cope
with all of these except the last point (doubling the database file
size), and even there a periodic vacuum could shrink the database.
There is also no need for a server process: each instance of the sqlite
library could act as a server.
A shared lock manager might be required, or each instance of the sqlite
library could simply have a separate one.
A shared lock manager is a tricky idea (maybe each sqlite library could
carry a dumb lock manager over a shared memory pool, and if one
instance terminates, another could detect this and take over that
role?).
In fact, as far as I know, that is roughly how it is implemented in
Firebird Classic Server (where each server process has its own lock
manager, I suppose).
These classic server processes are spawned by the xinetd daemon.
I see sqlite in a very similar way: the sqlite library is attached to
each spawned process that uses it.
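The takeover idea above can be sketched as a toy model. The shared
dict and the is_alive() callback below are stand-ins for a real
shared-memory pool and a process liveness check; none of these names
are SQLite or Firebird APIs.

```python
# Toy sketch: each library instance runs a "dumb" lock manager over
# shared state; locks record their owner, and any instance that finds
# a lock held by a dead owner reclaims it and plays that role.

shared_locks = {}  # rowid -> owner_id; stands in for shared memory


def acquire(rowid, owner_id, is_alive):
    """Try to take a row lock; steal it if the holder has died."""
    holder = shared_locks.get(rowid)
    if holder is not None and is_alive(holder):
        return False                 # held by a live instance
    shared_locks[rowid] = owner_id   # free, or stale: take it over
    return True


def release(rowid, owner_id):
    if shared_locks.get(rowid) == owner_id:
        del shared_locks[rowid]


live = {"A"}                         # pretend instance B has crashed
alive = lambda owner: owner in live
shared_locks[7] = "B"                # stale lock left behind by B
print(acquire(7, "A", alive))        # True: A detects B is gone
```

In a real implementation the liveness check and the takeover would of
course have to be atomic with respect to the shared pool, which is
exactly where this gets tricky.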
In order to communicate with the other lock managers, all instances of
the SQLite library would have to be on the same box.
Or share the same lock data, for example in a special internal sqlite
table (internal like sqlite_master). In that case the problem is
serializing access to the lock data, but we are talking about an MG
architecture in which pessimistic locks are rare.
If you want MVCC without process communication (as not all processes
would be on the same box) you'd need each row update to be synchronous
and synced, which would be slower than what we have now.
Here I don't quite understand. I thought that an MG architecture uses
a transaction manager to manage transactions. There is no need to sync
each row update, because each row has many record versions (old
committed versions are not removed until, for example, a vacuum), each
with a transaction ID and a stamp.
Problem: the transaction manager needs to be serialized when working
on the same database from concurrent computers.
Problem: without vacuum, more and more garbage accumulates inside the
database.
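The row-version idea described above can be sketched like this. It is
a toy model of multi-generational storage, not SQLite or Firebird
internals; the class and method names are illustrative only.

```python
# Toy sketch of multi-generational (MVCC-style) row storage: an
# update appends a new version tagged with the writing transaction's
# ID instead of locking the row, so readers are never blocked.

class VersionedRow:
    def __init__(self):
        self.versions = []  # list of (txid, value), newest last

    def write(self, txid, value):
        # Writers just append a new version.
        self.versions.append((txid, value))

    def read(self, snapshot_txid):
        # A reader sees the newest version committed at or before
        # its snapshot; concurrent older readers keep seeing theirs.
        for txid, value in reversed(self.versions):
            if txid <= snapshot_txid:
                return value
        return None  # row did not yet exist for this snapshot

    def vacuum(self, oldest_active_txid):
        # The garbage problem noted above: old versions pile up
        # until a vacuum drops those no active snapshot can see,
        # keeping one version visible to the oldest snapshot.
        keep = [v for v in self.versions if v[0] >= oldest_active_txid]
        older = [v for v in self.versions if v[0] < oldest_active_txid]
        if older:
            keep.insert(0, older[-1])
        self.versions = keep


row = VersionedRow()
row.write(1, "a")
row.write(5, "b")
print(row.read(3))   # a reader with snapshot 3 still sees "a"
print(row.read(9))   # a newer snapshot sees "b"
```

Note that read() and write() never conflict here, which is the point;
the serialization problem moves into handing out transaction IDs and
deciding commit order, i.e. into the transaction manager.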
The locking protocol could maybe be changed to allow locking at the
table level, but such a change would be incompatible with the current
locking protocol. And how do you manage multiple rollback journals for
multiple writers? A sort of table level locking is already possible
anyway using attached databases.
This is all about (pessimistic) locking, not about the MG architecture.
I can't see this being a feasible project.
Hey! As I stated I'm not an expert. ;-)
Regards
Boguslaw Brandys