I think for such a scenario you'd need read locks which get upgraded to
write locks when modifying. Consider this simple scenario without read locks:

Thread ONE: get relationship R
Thread TWO: get relationship R
Thread TWO: delete relationship R and commit transaction
Thread ONE: do any modification on relationship R... BOOM, an exception is
thrown
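
To make that concrete, here's roughly what those two threads do with the
embedded Java API (just a sketch; the method split and the property name
are mine, and the exact exception you get can vary):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.Transaction;

public class ReadWriteRace
{
    // Thread ONE: gets relationship R, then modifies it after Thread TWO
    // has already deleted it and committed.
    static void threadOne( GraphDatabaseService graphDb, long relId )
    {
        Transaction tx = graphDb.beginTx();
        try
        {
            Relationship r = graphDb.getRelationshipById( relId );
            // ... Thread TWO deletes R and commits right here ...
            r.setProperty( "touched", true ); // BOOM, R no longer exists
            tx.success();
        }
        finally
        {
            tx.finish();
        }
    }

    // Thread TWO: deletes relationship R and commits.
    static void threadTwo( GraphDatabaseService graphDb, long relId )
    {
        Transaction tx = graphDb.beginTx();
        try
        {
            graphDb.getRelationshipById( relId ).delete();
            tx.success();
        }
        finally
        {
            tx.finish();
        }
    }
}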

If read locks were to be taken when getting relationships:

Thread ONE: get relationship R
Thread TWO: get relationship R
Thread TWO: would like to delete relationship R, but will have to wait
until Thread ONE releases its read lock on it
Thread ONE: do some modification on relationship R and commit transaction
Thread TWO: the read lock from Thread ONE has been released, so delete
relationship R and commit transaction

Unfortunately you cannot choose between different isolation levels in Neo4j
at the moment, but you can mimic that behavior yourself using
LockManager/LockReleaser (available from graphDb.getConfig()), for example:
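
(Rough sketch from memory of the 1.x internals -- Config, LockManager,
LockReleaser and LockType are internal classes, so package and method names
may differ slightly in your version:)

import org.neo4j.graphdb.Relationship;
import org.neo4j.kernel.Config;
import org.neo4j.kernel.EmbeddedGraphDatabase;
import org.neo4j.kernel.impl.core.LockReleaser;
import org.neo4j.kernel.impl.transaction.LockManager;
import org.neo4j.kernel.impl.transaction.LockType;

public class ManualReadLock
{
    // Take a read lock on a relationship. Must be called inside a
    // transaction, since the lock is registered for release when that
    // transaction finishes.
    static void readLock( EmbeddedGraphDatabase graphDb, Relationship r )
    {
        Config config = graphDb.getConfig();
        LockManager lockManager = config.getLockManager();
        LockReleaser lockReleaser = config.getLockReleaser();

        // Grab the read lock...
        lockManager.getReadLock( r );
        // ...and register it with the current transaction so that it's
        // released automatically on commit/rollback.
        lockReleaser.addLockToTransaction( r, LockType.READ );
    }
}

With something like that in place, Thread TWO's delete would block waiting
for the read lock until Thread ONE's transaction finishes, which is the
second scenario above.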

2011/11/21 Aseem Kishore <aseem.kish...@gmail.com>

> Hey guys,
>
> If we put our app under a bit of load, creating and removing nodes and
> relationships concurrently, sometimes we get back a 500 Internal Server
> Error from the REST API when we do a traverse or Cypher query. Here's an
> example stack trace:
>
> https://gist.github.com/1381423
>
> We're running Neo4j 1.4 still, but does this stack trace provide any
> insight/ideas into what might be wrong, what we could do as a workaround,
> etc.? Can I provide any other info that would help?
>
> It goes without saying that our assumption is that Neo4j shouldn't be
> failing/crashing on queries; our expectation was that operations are
> transactional, etc.
>
> Thanks!
>
> Aseem



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com