Ok, I see your point. On the other hand, I tested it without LockableIterator and iterating was 5-10% faster. That is a high price to pay for preventing an unlikely problem.
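For context, this is roughly the caller-side pattern I mean (just a sketch: a Set<String> and a plain ReentrantReadWriteLock stand in for the graph and the wrapper's lock, it is not actual Clerezza code). Since the caller has to hold the read lock for the whole iteration anyway, the extra locking inside each hasNext()/next() call only adds overhead:

import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch: placeholders for the graph and its read/write lock.
public class CallerSideLocking {

    private static final Set<String> triples = new HashSet<String>();
    private static final ReadWriteLock lock = new ReentrantReadWriteLock();

    static void dumpTriples() {
        // The caller holds the read lock across the whole iteration, so no
        // write can sneak in between hasNext() and next(); per-call locking
        // inside the iterator adds nothing but overhead here.
        lock.readLock().lock();
        try {
            for (Iterator<String> it = triples.iterator(); it.hasNext();) {
                System.out.println(it.next());
            }
        } finally {
            lock.readLock().unlock();
        }
    }
}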
Regards,
Manuel

On Thu, Nov 11, 2010 at 11:52 AM, Reto Bachmann-Gmuer <[email protected]> wrote:
> Hi Manuel
>
> It would be consistent to throw the ConcurrentModificationException in
> the LockableMGraphWrapper if a write operation took place since the
> last read from the iterator; we currently leave it to the underlying
> implementation to do this. So you're right in so far as the current
> wrapping does not do what one might expect, in that it doesn't
> guarantee every read will succeed. However, and that I think is the
> justification for those locks in the iterator, it guarantees that no
> read happens during a write operation. I think that even though it's
> unlikely, it cannot be excluded that such a read could compromise the
> write, so the current implementation is sufficient and necessary to
> guarantee no data corruption due to concurrent access (the assumption
> is that a write between calls of iterator.next() can corrupt the
> iterator but not the graph).
>
> Cheers,
> Reto
>
>
> On Wed, Nov 10, 2010 at 11:57 AM, Manuel Innerhofer <[email protected]> wrote:
> > Hi Reto,
> >
> > I had a closer look at filter() and iterator() of LockableMGraphWrapper.
> > It seems to me that the read locks taken in these methods and in the
> > LockableIterator are mostly unnecessary and impair performance. The
> > LockableIterator locks the graph for every call of next() and hasNext(),
> > but between calls the graph is not read-locked, so a write operation can
> > occur. Because of this, a caller of one of these methods has to
> > read-lock the graph while iterating anyway. I propose to no longer use
> > LockableIterator in filter() and iterator().
> > What do you think?
> >
> > Regards,
> > Manuel
> >
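For comparison, here is a rough sketch of the per-call locking described in the quoted mail above (names and structure are illustrative, not the actual LockableIterator implementation): each hasNext()/next() call is individually protected, so a read never overlaps a write, but a write can still happen between two calls and invalidate the iterator.

import java.util.Iterator;
import java.util.concurrent.locks.Lock;

// Illustrative only; not the real Clerezza LockableIterator.
class PerCallLockingIterator<T> implements Iterator<T> {

    private final Iterator<T> base;
    private final Lock readLock;

    PerCallLockingIterator(Iterator<T> base, Lock readLock) {
        this.base = base;
        this.readLock = readLock;
    }

    public boolean hasNext() {
        // Locked only for the duration of this call: no read overlaps
        // a write, but a write may occur before the next call.
        readLock.lock();
        try {
            return base.hasNext();
        } finally {
            readLock.unlock();
        }
    }

    public T next() {
        readLock.lock();
        try {
            return base.next();
        } finally {
            readLock.unlock();
        }
    }

    public void remove() {
        base.remove();
    }
}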
