seth vidal wrote:
On Thu, 2008-02-21 at 16:17 +0100, Florian Festi wrote:

If it gets implemented that simple we can get deadlocked quite easily.

instance 1: lock repo1
instance 2: lock repo2
instance 1: lock repo2 -> wait
instance 2: lock repo1 -> wait -> deadlock


No.
instance 1: lock repo1, do what you have to do, unlock repo1
instance 2: lock repo2, do what you have to do, unlock repo2
instance 1: lock repo2, wait until instance 2 unlocks, continue.

How are we deadlocking there?

You're thinking of the lock being held for the entire runtime of the program;
I'm thinking of it being held only for the time during which we are
WRITING/downloading info.
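
For reference, a minimal sketch of that short-lived per-repo lock, using a per-repo lock file and flock(); the repo id, the cache dir and the download step are placeholders for illustration, not actual yum code:

    import fcntl, os

    def update_repo(repoid, cachedir):
        lockpath = os.path.join(cachedir, repoid + '.lock')
        lockfile = open(lockpath, 'w')
        # Block until no other instance is writing this repo's metadata.
        fcntl.flock(lockfile, fcntl.LOCK_EX)
        try:
            download_and_write_metadata(repoid, cachedir)   # placeholder
        finally:
            # Drop the lock as soon as the write is done, so an instance
            # waiting on this repo can continue immediately.
            fcntl.flock(lockfile, fcntl.LOCK_UN)
            lockfile.close()

Since each instance holds at most one repo lock at a time and releases it before touching the next repo, there is no lock ordering to get wrong and so no deadlock.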

Maybe I do not understand the problem you want to solve, but how does it help if we lock the repos for writes only? I guess the sqlite db will be very unhappy if you replace the file it is working on, even in read-only mode. We'd be better off using filenames containing the SHA1 for the dbs and an atomic replace for the repomd file.
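
As a rough sketch of that idea (only a sketch; the filenames, cache dir and in-memory data arguments are made up for illustration): store the sqlite db under a name derived from its SHA1, then rename a freshly written repomd.xml into place, since rename() is atomic on POSIX:

    import os, hashlib

    def install_metadata(cachedir, dbdata, repomd_xml):
        # Store the db under a checksum-derived name, so a reader that
        # already has the old db open keeps a valid, unchanged file.
        sha1 = hashlib.sha1(dbdata).hexdigest()
        dbfile = open(os.path.join(cachedir, '%s-primary.sqlite' % sha1), 'wb')
        dbfile.write(dbdata)
        dbfile.close()

        # Write the new repomd.xml beside the old one and rename it into
        # place; readers see either the old or the new file, never a
        # half-written one.
        tmp = os.path.join(cachedir, 'repomd.xml.new')
        f = open(tmp, 'w')
        f.write(repomd_xml)
        f.close()
        os.rename(tmp, os.path.join(cachedir, 'repomd.xml'))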

As it is very unlikely that two applications will work on two completely separate sets of repos, it would make more sense to have a single global lock that can be acquired multiple times for reads but only once for writes (with a write lock also having to wait for outstanding read locks).
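
A minimal sketch of such a global lock, using a shared/exclusive flock() (the lock file path is an assumption):

    import fcntl

    GLOBAL_LOCK = '/var/run/yum-metadata.lock'

    def lock_metadata(for_write=False):
        f = open(GLOBAL_LOCK, 'w')
        # LOCK_SH lets any number of readers hold the lock at once;
        # LOCK_EX waits until all readers and any writer have released it.
        fcntl.flock(f, fcntl.LOCK_EX if for_write else fcntl.LOCK_SH)
        return f        # keep this object around; closing it drops the lock

    def unlock_metadata(f):
        fcntl.flock(f, fcntl.LOCK_UN)
        f.close()

Writers then exclude both other writers and readers, while read-only users never block each other.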

Florian