> User 1 runs a process that does a READ of the record. Then User 2 runs a
> process that does a READU on the record. User 2 files the record and
> updates the information in the record. Now User 1 files his process and
> writes the old information back to the record.

The UniVerse locking system is voluntary in the sense that applications can
ignore it completely. The system should never have allowed writes or deletes
without a lock, and the ISOMODE configuration parameter allows this rule to
be enforced.

Applications that write or delete without locks are asking for trouble.


There is another way in which this user's problem might arise, though it is
very unlikely....

UniVerse does not store the whole of a long record id in the lock tables.
Instead, it keeps the first 63 characters, followed by a single byte formed
from an exclusive-or of all the following bytes. The impact of this is that
UV cannot distinguish two ids that are identical in the first 63 bytes and
yield the same merged value for the remaining bytes.
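
The fragment below is only an illustration of the scheme just described;
the real lock table code is internal to UniVerse, and the ids and variable
names here are invented for the example. Because exclusive-or is
commutative, transposing the trailing bytes gives the same merged value:

* Mimic the lock table compression: first 63 characters, then one byte
* formed by exclusive-oring all the remaining bytes together.
ID1 = STR('X', 63) : 'AB'
ID2 = STR('X', 63) : 'BA'

KEY1 = ID1[1,63]
MERGED = 0
FOR I = 64 TO LEN(ID1)
   MERGED = BITXOR(MERGED, SEQ(ID1[I,1]))
NEXT I
KEY1 := CHAR(MERGED)

KEY2 = ID2[1,63]
MERGED = 0
FOR I = 64 TO LEN(ID2)
   MERGED = BITXOR(MERGED, SEQ(ID2[I,1]))
NEXT I
KEY2 := CHAR(MERGED)

IF KEY1 = KEY2 THEN PRINT 'Both ids reduce to the same lock table entry'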

Try this simple test....

1. Create a modulo 1 file (see below for why it needs to be modulo 1).

2. Add two records whose ids have the same value in the first 63 bytes,
followed by AB for one record and BA for the other.

3. In one process, read and lock one of these records. Another process will
not be able to read and lock the second record.

OK, this is not too bad: all that is happening is that the second process
thinks the record is locked when it isn't.

4. Now read and lock both records in the one process. Do a quick LIST.READU
and notice that there is only one lock.

5. Release the lock on one record. LIST.READU now shows that there are no
locks when the program quite reasonably thinks it still has the other record
locked.

Other users can now freely update this record.
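
For anyone who wants to reproduce steps 4 and 5 from a program rather than
at TCL, here is a minimal BASIC sketch. It assumes a type 2, modulo 1 file
called LOCK.TEST that already holds the two records from step 2; the file
and variable names are invented, and the file can be created with something
like CREATE.FILE LOCK.TEST 2 1 1.

OPEN 'LOCK.TEST' TO F.TEST ELSE STOP 'Cannot open LOCK.TEST'

ID1 = STR('X', 63) : 'AB'
ID2 = STR('X', 63) : 'BA'

READU REC1 FROM F.TEST, ID1 ELSE REC1 = ''  ;* takes the lock entry
READU REC2 FROM F.TEST, ID2 ELSE REC2 = ''  ;* same entry - one lock in LIST.READU

RELEASE F.TEST, ID2  ;* releases the single entry shared by both ids

* LIST.READU now shows no locks, although this program still believes it
* holds ID1. Another process is free to READU and WRITE that record.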

Things are not actually as bad as they sound because the two records must
also be in the same group (hence the modulo 1). However, even with bigger
files, it is possible to fall over this one by accident.

Consider a file that is keyed by something long like a web address. The
application designer, not really understanding file types, chooses to use
type 2, 3, 4 or 5. These types base the hashing on only the first few
characters of the id. Two ids that meet the criteria above will necessarily
hash to the same group, and hence the problem occurs.

Sounds unlikely? I've seen it happen.


Martin Phillips
Ladybridge Systems
17b Coldstream Lane, Hardingstone, Northampton NN4 6DB
+44-(0)1604-709200
