Joe Wilson wrote:
--- Richard Klein <[EMAIL PROTECTED]> wrote:
In implementing xLock in a VFS, do we need to worry
about lock counts, i.e. nested locking?
In other words, if a process asks for, say, a SHARED
lock, and he already has one, should we increment a
SHARED lock count? Or is it okay to just return,
i.e. to treat the request as a no-op?
See comments for unixLock() and unixUnlock() in os_unix.c.
/*
** An instance of the following structure is allocated for each open
** inode on each thread with a different process ID. (Threads have
** different process IDs on linux, but not on most other unixes.)
**
** A single inode can have multiple file descriptors, so each unixFile
** structure contains a pointer to an instance of this object and this
** object keeps a count of the number of unixFile pointing to it.
*/
struct lockInfo {
  struct lockKey key;  /* The lookup key */
  int cnt;             /* Number of SHARED locks held */
  int locktype;        /* One of SHARED_LOCK, RESERVED_LOCK etc. */
  int nRef;            /* Number of pointers to this structure */
};
...
/* If a SHARED lock is requested, and some thread using this PID already
** has a SHARED or RESERVED lock, then increment reference counts and
** return SQLITE_OK.
*/
if( locktype==SHARED_LOCK &&
    (pLock->locktype==SHARED_LOCK || pLock->locktype==RESERVED_LOCK) ){
  assert( locktype==SHARED_LOCK );
  assert( pFile->locktype==0 );
  assert( pLock->cnt>0 );
  pFile->locktype = SHARED_LOCK;
  pLock->cnt++;
  pFile->pOpen->nLock++;
  goto end_lock;
}
I think you're referring to the 3rd & 4th lines from the bottom of this
code snippet:
pLock->cnt++;
pFile->pOpen->nLock++;
True, these lines increment lock counts, but not for the reason I was
worried about in my original post, i.e. not for the purpose of keeping
track of *nested* locks held by a single file descriptor.
The statement 'pLock->cnt++;' merely keeps track of the number of SHARED
locks currently held on the file by this process's file descriptors. This
is important information. For example, if the file is currently in the
SHARED state (pLock->locktype==SHARED_LOCK), and the last SHARED lock is
released by a file descriptor, we need to recognize that the file is now
UNLOCKED.

Or, suppose the file is SHARED and a file descriptor wants to acquire an
EXCLUSIVE lock on the file. If there is only one SHARED lock currently
held on the file (pLock->cnt==1), and the file descriptor in question is
the one that holds it (pFile->locktype==SHARED_LOCK), then there is no
harm in promoting that SHARED lock to EXCLUSIVE.
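Just to make that bookkeeping concrete, here is a rough sketch of those two
situations. The types below are hypothetical, stripped-down stand-ins for
the structs quoted above (the key and nRef fields are omitted), and the
function names are made up; this is the shape of the logic, not the actual
os_unix.c code:

/* Hypothetical, simplified stand-ins for the structs quoted above.
** This is a sketch of the bookkeeping, not the actual os_unix.c code.
*/
#define NO_LOCK         0
#define SHARED_LOCK     1
#define RESERVED_LOCK   2
#define EXCLUSIVE_LOCK  4

struct lockInfo { int cnt; int locktype; };                /* per-inode, within this process */
struct fileDesc { int locktype; struct lockInfo *pLock; }; /* stand-in for unixFile          */

/* Releasing a SHARED lock: only when the last holder within this process
** lets go does the inode drop back to the UNLOCKED state.
*/
static void releaseShared(struct fileDesc *pFile){
  struct lockInfo *pLock = pFile->pLock;
  if( pFile->locktype>=SHARED_LOCK && pLock->cnt>0 ){
    pLock->cnt--;
    if( pLock->cnt==0 ){
      /* ...the real code would drop the OS-level advisory lock here... */
      pLock->locktype = NO_LOCK;
    }
    pFile->locktype = NO_LOCK;
  }
}

/* Upgrading to EXCLUSIVE: as far as this process's own bookkeeping goes,
** it is safe only if this file descriptor holds the one and only SHARED
** lock on the inode.
*/
static int canPromoteToExclusive(const struct fileDesc *pFile){
  return pFile->locktype==SHARED_LOCK && pFile->pLock->cnt==1;
}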
The statement 'pFile->pOpen->nLock++;' increments the lock count in the
'openCnt' struct for the database file in question:
/*
** An instance of the following structure is allocated for each open
** inode. This structure keeps track of the number of locks on that
** inode. If a close is attempted against an inode that is holding
** locks, the close is deferred until all locks clear by adding the
** file descriptor to be closed to the pending list.
*/
struct openCnt {
  struct openKey key;  /* The lookup key */
  int nRef;            /* Number of pointers to this structure */
  int nLock;           /* Number of outstanding locks */
  int nPending;        /* Number of pending close() operations */
  int *aPending;       /* Malloced space holding fd's awaiting a close() */
};
The 'openCnt' struct is necessary to work around another weirdness
of POSIX advisory locks:
** If you close a file descriptor that points to a file that has locks,
** all locks on that file that are owned by the current process are
** released. To work around this problem, each unixFile structure contains
** a pointer to an openCnt structure. There is one openCnt structure
** per open inode, which means that multiple unixFiles can point to a single
** openCnt. When an attempt is made to close a unixFile, if there are
** other unixFiles open on the same inode that are holding locks, the call
** to close() the file descriptor is deferred until all of the locks clear.
** The openCnt structure keeps a list of file descriptors that need to
** be closed and that list is walked (and cleared) when the last lock
** clears.
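To illustrate the idea, here is a rough sketch of the deferred close. Again,
the struct is a hypothetical, stripped-down stand-in for the openCnt struct
shown above, and the function names are made up; error handling is omitted:

/* Hypothetical sketch of deferring a close(); not the actual os_unix.c code. */
#include <stdlib.h>
#include <unistd.h>

struct openCnt {
  int nLock;      /* Number of outstanding locks on the inode       */
  int nPending;   /* Number of fd's waiting to be closed            */
  int *aPending;  /* Malloced space holding fd's awaiting a close() */
};

/* Called when a file descriptor on this inode is being closed. */
static void closeOrDefer(struct openCnt *pOpen, int fd){
  if( pOpen->nLock==0 ){
    close(fd);   /* safe: no locks on the inode would be dropped */
  }else{
    /* Closing now would release every lock this process holds on the file,
    ** so remember the fd and close it once the locks clear. */
    pOpen->aPending = realloc(pOpen->aPending, (pOpen->nPending+1)*sizeof(int));
    pOpen->aPending[pOpen->nPending++] = fd;
  }
}

/* Called when the last lock on the inode is released (nLock drops to 0). */
static void closePending(struct openCnt *pOpen){
  int i;
  for(i=0; i<pOpen->nPending; i++) close(pOpen->aPending[i]);
  free(pOpen->aPending);
  pOpen->aPending = 0;
  pOpen->nPending = 0;
}

The important point is that the close() is deferred, not skipped: every queued
descriptor is eventually closed, just not while doing so would silently drop
locks that other descriptors in the process still rely on.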
Cheers,
- Richard Klein