Hi all,

----- Original Message ----

> From: Yves Raimond <[email protected]>
> To: [email protected]
> Sent: Thursday, 18 June, 2009 9:57:51 PM
> Subject: [Virtuoso-devel] Deadlocking and slow queries

> Another major issue we're running into is the deadlocking mechanism.
> We have a constant flow of updates going in through SPARQL/Update. Our
> dataset is a collection of fairly small graphs (around 30 triples
> each). When we do a query like the above, going through all these
> graphs, we're almost sure to reach a deadlock at some point. At almost
> any point in time, there is an update going on in one of the graphs.

I have been having this issue too, as I am fairly constantly running SPARQL 
INSERTs on a single graph and intermittently asking the graph for results. It 
would be nice to have a consistent solution if possible, even if the read-only 
query is postponed for a second to allow the concurrent insert to run its 
course. None of the inserts are very large (30 or so triples each), so it is a 
little strange that the two queries interfere with each other at all. By the 
way, I have five indexes on the RDF_QUAD table, in case that's relevant: 
RDF_QUAD_GPOS, RDF_QUAD_OGPS, RDF_QUAD_OPGS, RDF_QUAD_POGS, RDF_QUAD_SPOG.
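
In case it helps, here is roughly the shape of what I'm running. This is only a 
minimal sketch, not my actual code: the endpoint URL, graph name, and triples 
are made up for illustration, and it assumes the default Virtuoso /sparql 
endpoint with update rights for the account used.

    import requests

    ENDPOINT = "http://localhost:8890/sparql"    # assumed default Virtuoso endpoint
    GRAPH = "http://example.org/bio2rdf-stats"   # hypothetical graph name

    # A small insert, a few dozen triples at most (Virtuoso-style SPARUL).
    insert = """
    INSERT INTO GRAPH <%s> {
        <http://example.org/run/1> <http://example.org/vocab/queryCount> 42 .
    }
    """ % GRAPH

    # The intermittent read that runs while the inserts keep going.
    read = "SELECT * FROM <%s> WHERE { ?s ?p ?o } LIMIT 10" % GRAPH

    requests.post(ENDPOINT, data={"query": insert})
    print(requests.post(ENDPOINT, data={"query": read},
                        headers={"Accept": "application/sparql-results+json"}).json())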

The whole database actually silently locked up at one point and stopped 
INSERTs working at all when I was experimenting with CLEARing the graph while 
INSERTs were happening, but I restarted it and haven't tried the same thing 
again, so I can't say whether it is a reproducible bug. You might be able to 
test some concurrent CLEAR GRAPH queries along with constant INSERTs (every 
second or so) and see if the lockup happens in a test scenario; something along 
the lines of the sketch below is what I had in mind.
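
Roughly this (again only a sketch: the endpoint URL, graph name, and timings 
are made up, and it assumes update rights on the /sparql endpoint):

    import threading, time, requests

    ENDPOINT = "http://localhost:8890/sparql"   # assumed default Virtuoso endpoint
    GRAPH = "http://example.org/test-graph"     # hypothetical test graph

    def run(q):
        requests.post(ENDPOINT, data={"query": q})

    def inserter():
        i = 0
        while True:
            # A small INSERT every second or so.
            run("INSERT INTO GRAPH <%s> { <http://example.org/s/%d> "
                "<http://example.org/p> %d . }" % (GRAPH, i, i))
            i += 1
            time.sleep(1)

    def clearer():
        while True:
            # Clear the same graph while the inserts keep coming.
            run("CLEAR GRAPH <%s>" % GRAPH)
            time.sleep(5)

    threading.Thread(target=inserter, daemon=True).start()
    threading.Thread(target=clearer, daemon=True).start()
    time.sleep(60)   # let it run for a while and watch for a hang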

The inserts are some experimental statistics gathering based on Bio2RDF queries 
that I thought would be cool, and I want to periodically check the progress 
without stopping the statistics-gathering process.

Cheers,

Peter


