If you want to share a file you have to be able to synchronize access in
some way. POSIX-style file locks let you do this quite well by
providing read and write locks.
If you want shared access to a file from multiple processes you either
need some form of co-operative lock, like a semaphore, or a safer method
such as POSIX locks, which enforce co-operation. If you are sharing the
resource among multiple threads in a single process you have the option
of using a co-operative lock like a mutex/critical section, or of using
the POSIX locks and not bothering with all the co-op logic.
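For the threads-in-one-process case, here is a minimal sketch of the
co-operative option (my own illustration; the file name and helper are
hypothetical). It only works if every thread agrees to take the mutex
before touching the shared file:

/* Co-operative locking among threads in one process. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t file_mutex = PTHREAD_MUTEX_INITIALIZER;

static void write_record(FILE *fp, const char *rec)
{
    pthread_mutex_lock(&file_mutex);   /* every thread must do this */
    fputs(rec, fp);
    fflush(fp);
    pthread_mutex_unlock(&file_mutex);
}

int main(void)
{
    FILE *fp = fopen("shared.log", "a");
    if (fp == NULL) return 1;
    write_record(fp, "one record\n");
    fclose(fp);
    return 0;
}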
Underneath the covers Sqlite does an fopen on the file and uses fcntl
for locking. It shares the file just as any application would share
access to a file.
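To make the fcntl part concrete, here is a minimal sketch of POSIX
advisory locking (the file name is hypothetical, and SQLite's real
protocol locks specific byte ranges and moves through several lock
states, so this is only the flavour of it):

/* POSIX advisory record locking with fcntl(): a shared (read)
 * lock, an upgrade to an exclusive (write) lock, then release. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("demo.db", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock lk = {0};
    lk.l_type   = F_RDLCK;   /* shared lock: readers can coexist */
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;
    lk.l_len    = 0;         /* 0 means "to the end of the file" */
    if (fcntl(fd, F_SETLK, &lk) == -1) perror("read lock");

    lk.l_type = F_WRLCK;     /* exclusive: no other readers/writers */
    if (fcntl(fd, F_SETLK, &lk) == -1) perror("write lock");

    lk.l_type = F_UNLCK;     /* release */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}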
If you want something different, use a server-based RDBMS like
PostgreSQL. By having a central server it can implement synchronization
however it likes.
Ken wrote:
If it is a replacement for fopen, then why does it perform locking at all?
Since sqlite has implemented threading and multiple connections, the next logical step in its evolution is to enable advanced locking techniques for even greater concurrency.
Ken
John Stanton <[EMAIL PROTECTED]> wrote: If Sqlite were to implement its own locking then it would lose
simplicity and versatility.
A good way to regard Sqlite is as a replacement for fopen. It is a way
an application can access a single file. If you want more than that you
are not looking for "'lite" and should go straight to PostgreSQL rather
than trying to turn Sqlite into PostgreSQL, Oracle or DB/2.
Ken wrote:
Regarding the locking:
Yes, certainly only within a single-process architecture. I understand that SQLite is using file locks to co-ordinate multiple processes; on Unix that lock is fcntl. (Fcntl is an ugly beast; imho sqlite would be better served managing locks internally.)
I guess there are really two camps of sqlite users:
Users who build on a multi-process system.
And embedded users who build a single process with multiple threads.
So is sqlite geared towards multiple-process locking? (The current design says it is.)
Why not also enable it to support multiple threads at a higher concurrency level by using page-level locking (or row-level, for that matter)?
Either way I think it's a great piece of software. Thanks DRH.
Christian Smith wrote: Ken uttered:
Would it be possible to implement a page-level locking system using
sqlite?
Deadlock detection/recovery and/or prevention is much more complex for
page-level locking. As there is no shared data between SQLite clients
other than the database file itself, tracking locks in other clients is
impossible beyond the OS-level locking.
A single process with multiple threads could feasibly track page-level
locking, but it would be intra-process only, and would complicate the
locking significantly for relatively little benefit.
Note that a writer can already operate concurrently with readers until it
fills its cache, and only has to have exclusive use of the database when
synching its cache on spill or commit.
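A minimal sketch of that window, assuming the default rollback journal
(the file name is hypothetical): after its first write the writer holds
only a RESERVED lock, so the reader's SELECT still succeeds; EXCLUSIVE
is needed only at commit or on a cache spill.

/* Two connections to the same database: the writer's open
 * transaction does not block the reader until commit/spill. */
#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
    sqlite3 *w, *r;
    sqlite3_open("demo.db", &w);
    sqlite3_open("demo.db", &r);

    sqlite3_exec(w, "CREATE TABLE IF NOT EXISTS t(x)", 0, 0, 0);
    sqlite3_exec(w, "BEGIN", 0, 0, 0);
    sqlite3_exec(w, "INSERT INTO t VALUES(1)", 0, 0, 0); /* RESERVED */

    /* The reader is not blocked yet: */
    int rc = sqlite3_exec(r, "SELECT count(*) FROM t", 0, 0, 0);
    printf("reader rc=%d (0 == SQLITE_OK)\n", rc);

    sqlite3_exec(w, "COMMIT", 0, 0, 0); /* EXCLUSIVE taken only here */
    sqlite3_close(r);
    sqlite3_close(w);
    return 0;
}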
I seem to recall a commercial product, Ingres, that utilized a page-level
locking scheme that would escalate to a table-level lock when over a
certain % of a table's pages were locked.
If you want multiple concurrent writers, use PostgreSQL instead. It uses
MVCC to avoid even row-level locks. And it has an embedded SQL
pre-processor to boot.
This obviously would be very beneficial for those of us who are
implementing threaded applications using sqlite. The shared cache
concept would need to be extended so that a global context for the
shared cache would be maintained, allowing each individual thread that
connects (sqlite3_open) to utilize the shared cache for that DB.
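For reference, a minimal sketch of the shared-cache mode that exists
today (sqlite3_enable_shared_cache() is a real API; the global context
described above is the proposed extension, and note that in some SQLite
versions connections sharing a cache must stay on the thread that
opened them):

/* Shared-cache mode: connections opened after enabling it share
 * one page cache within this process. */
#include <sqlite3.h>

int main(void)
{
    sqlite3 *a, *b;
    sqlite3_enable_shared_cache(1);
    sqlite3_open("demo.db", &a);   /* e.g. the handle for one thread */
    sqlite3_open("demo.db", &b);   /* shares a's page cache */
    /* ... each thread would run its queries on its own handle ... */
    sqlite3_close(b);
    sqlite3_close(a);
    return 0;
}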
The benefit would be to move the locking from the DB level down to the
page level. Keep a list of the before-image pages in memory so that
readers are not blocked by the writers (lock holders) of the pages.
When the writer txn completes, mark the txn as completed in the journal.
I think it would be beneficial to revisit the journal design as well for
this. Mainly, don't open/close journals; since a journal is always part
of the DB, pre-build the journal and allow multiple writer threads into
the journal. This should also help with some of the areas inside the
pager/journal code where a random checksum is used to validate the
journal upon crash recovery.
Just some ideas to improve concurrency that I had and wanted to jot
down.
The current journal implementation is simple and fast. Using a single
rollback journal, rather than a redo-undo journal, minimizes the amount
of data that has to be written, and removes the need for the
asynchronous commit processing that would be required by any other
journal design. Consider the alternatives for a minute:
- Current journal design:
Pros: simple, fast for single-writer big transactions.
Cons: Single writer only. Writer blocks readers. D in ACID requires
multiple file syncs.
- Write Ahead Redo-Undo log:
Pros: Potentially multiple writers, ACID with single file sync.
Cons: Difficult to coordinate between multiple processes. Requires
async process to write log entries to database file.
- Write Ahead Redo log:
Pros: Simpler than Redo-Undo log. ACID with single file sync.
Cons: No better write concurrency than current journal. Still requires
async process to write log entries to database file.
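For context, the "multiple file syncs" in the first set of cons come
from the commit sequence of the current rollback journal, roughly
(simplified):
1. Write the original page images to the journal file.
2. Sync the journal so it is durable before the database is touched.
3. Write the modified pages into the database file.
4. Sync the database file.
5. Delete (or truncate) the journal; that deletion is the commit point.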
DRH, I just reviewed tokenize.c and pager.c. I concur that yacc/bison
have the token passing incorrect. Your implementation of tokenize is
very interesting; what an excellent technique, embedding the tokens
using overlapping strings.
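For anyone curious, here is a toy reconstruction of the
overlapping-string idea (my own illustration, not SQLite's actual
keyword table): one buffer serves several keywords through
(offset, length) pairs.

/* "BEFOREIGNORE" packs BEFORE, FOREIGN and IGNORE into one
 * string; each keyword is an (offset, length) pair into it. */
#include <stdio.h>

static const char zText[] = "BEFOREIGNORE";
static const struct { unsigned char off, len; } aKW[] = {
    { 0, 6 },  /* BEFORE  */
    { 2, 7 },  /* FOREIGN */
    { 6, 6 },  /* IGNORE  */
};

int main(void)
{
    for (unsigned i = 0; i < sizeof(aKW)/sizeof(aKW[0]); i++)
        printf("%.*s\n", aKW[i].len, zText + aKW[i].off);
    return 0;
}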
Can you send me an email address? I have some code that you might find
interesting to utilize in pager.c for the checksums.
Just post it to the list. Perhaps use the ticket tracker and attach your
patch(es) to a ticket.
Christian