Hi Andy,
it seems that I resolved the problem using a different (and simpler)
configuration for Joseki and SDB.
The trick seems to be to have only one sdb:DatasetStore (the complete SDB
store) without a <#sdb-part> section declaring named graphs mapped onto SDB.
I attached the new config file for more detail.
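For reference, the simplified shape is roughly the following (a sketch reusing the identifiers from the attached config, not the full file):

```turtle
## One service and one sdb:DatasetStore covering the whole store;
## no <#sdb-part> section mapping named graphs onto SDB.
<#service1>
    rdf:type          joseki:Service ;
    joseki:serviceRef "sparql" ;
    joseki:dataset    <#sdb> ;
    joseki:processor  joseki:ProcessorSPARQL_FixedDS .

<#sdb>
    rdf:type  sdb:DatasetStore ;
    sdb:store <#store> .
```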
Thanks again for the support,
Matteo.
-------- Original Message --------
Subject: Re: SPARQL: Joseki and "Already in transaction" error for
concurrent query context
Date: Fri, 13 Jan 2012 10:34:42 +0100
From: Matteo Busanelli (GMail) <[email protected]>
To: [email protected]
Hi Andy,
first of all, thanks for the quick response and support.
I've tried your suggested solution but nothing changed.
I've also tried running the queries from two browser tabs (both on the same
graph or on two different graphs) and the second query receives:
"HTTP ERROR 500
Problem accessing /sparql. Reason:
INTERNAL_SERVER_ERROR"
...until the first query terminates, returning "RESPONSE /sparql 200".
It seems that Joseki doesn't handle the "Already in transaction" exception of
com.hp.hpl.jena.sdb.graph.TransactionHandlerSDB.
...
2012-01-13 10:31:32 - TransactionHandlerSDB :: beginTransaction: Already in
a transaction
2012-01-13 10:31:32 - Servlet :: Internal server error
com.hp.hpl.jena.sdb.SDBException: Already in transaction
at
com.hp.hpl.jena.sdb.graph.TransactionHandlerSDB.begin(TransactionHandlerSDB.java:45)
...
I would have expected Joseki to handle this exception, at least by creating
and managing some sort of waiting queue of queries (for example with the
Mutex locking policy).
Could the problem be that I run Joseki from the command line (using the
bundled Jetty)?
Were you possibly able to replicate the error?
I've attached the config file with the solutions you suggested, so you can
see whether I've understood what you said.
(I also tried creating a different store and a different connection for each
graph, but nothing changed...)
Thanks again for the support,
Matteo.
On 12/01/2012 20:59, Andy Seaborne wrote:
Matteo,
I think the problem is that you are using a general dataset for the service
<sparql>, but it has several ways to reach the same SDB database client
instance <#sdb_busa> via the different graphs in <#sdb-part>.
It should work if you create different instances of rdf:type
sdb:DatasetStore, one for each named graph (I haven't tried). Each one is a
JDBC connection.
If they are all the same, a single query ends up starting a new transaction
for each of the graphs, but with JDBC you only have one transaction at a time
per connection.
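A sketch of that suggestion (untested, as noted; the names <#sdbGraph1>, <#conn1> etc. are illustrative, not from the original config): each named graph gets its own sdb:DatasetStore backed by its own sdb:SDBConnection, so each has a separate JDBC connection and hence a separate transaction.

```turtle
## One sdb:DatasetStore (and therefore one JDBC connection) per named graph.
## <#conn1> and <#conn2> would be separate sdb:SDBConnection instances.
<#sdbGraph1>
    rdf:type  sdb:DatasetStore ;
    sdb:store [ rdf:type       sdb:Store ;
                sdb:layout     "layout2" ;
                sdb:connection <#conn1> ] .

<#sdbGraph2>
    rdf:type  sdb:DatasetStore ;
    sdb:store [ rdf:type       sdb:Store ;
                sdb:layout     "layout2" ;
                sdb:connection <#conn2> ] .
```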
Do you really need the dataset structured like that?
Maybe TDB and dynamic datasets would work better for you.
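In assembler terms, a TDB-backed dataset would look roughly like this (a minimal sketch; the "DB" directory is an assumed location, and the dynamic-dataset behaviour sits on top of this):

```turtle
## Minimal TDB dataset declaration (sketch; "DB" is an assumed local directory).
@prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .

[] ja:loadClass "com.hp.hpl.jena.tdb.TDB" .
tdb:DatasetTDB rdfs:subClassOf ja:RDFDataset .

<#tdbDataset>
    rdf:type     tdb:DatasetTDB ;
    tdb:location "DB" .
```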
Andy
On 12/01/12 17:01, Matteo Busanelli (GMail) wrote:
This is my configuration file (attached).
On 12/01/2012 17:58, Andy Seaborne wrote:
On 12/01/12 09:39, Matteo Busanelli (GMail) wrote:
Hi everyone,
I'm trying to use Joseki with SDB (on MySQL 5.1) to serve queries from a
multithreaded application.
As long as the queries don't overlap, everything works well.
The problem comes when two different threads make concurrent queries that
overlap, and I get this error:
======================= log Stacktrace =====================
...
As you can see, the second REQUEST arrives before the RESPONSE to the first
one. As a result, the first query ends correctly (RESPONSE /sparql 200)
whereas the second one receives RESPONSE /sparql 500 (so I get an HTTP 500
Internal Server Error from Joseki).
I already tried configuring the "joseki:lockingPolicy" parameter, but
whatever value I specify (joseki:lockingPolicyMutex,
joseki:lockingPolicyMRSW, joseki:lockingPolicyNone) the result doesn't
change. I also tried increasing the "joseki:poolSize" parameter, but nothing
changed.
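For reference, this is the relevant fragment from the attached config, with the alternative values I tried commented out:

```turtle
## In the joseki:ProcessorSPARQL_FixedDS declaration:
joseki:lockingPolicy joseki:lockingPolicyMRSW ;
## joseki:lockingPolicy joseki:lockingPolicyMutex ;
## joseki:lockingPolicy joseki:lockingPolicyNone ;
```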
What can I do to make Joseki handle concurrent queries through SDB correctly?
I'm working with:
- Joseki 3.4.4
- SDB-1.3.4
- Jena-2.6.4
- MySQL 5.1
If it may be useful I can attach the joseki config file used.
It would be useful to see it.
Andy
Thanks in advance,
Matteo.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix module: <http://joseki.org/2003/06/module#> .
@prefix joseki: <http://joseki.org/2005/06/configuration#> .
@prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix sdb: <http://jena.hpl.hp.com/2007/sdb#> .
@prefix imo: <http://www.imolinfo.it/ontologie/> .
@prefix d2rq: <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
<> rdfs:label "Joseki Configuration File - SDB busa" .
## Stripped down to support one service that exposes an
## SDB store as a SPARQL endpoint for query and one for
## one of the graphs in an SDB store.
[] rdf:type joseki:Server .
## --------------------------------------------------------------
## Services
## Service publishes the whole of the SDB store - this is the usual way
## to use SDB.
<#service1>
rdf:type joseki:Service ;
rdfs:label "SPARQL-SDB" ;
joseki:serviceRef "sparql" ;
joseki:dataset <#sdb> ;
joseki:processor joseki:ProcessorSPARQL_FixedDS .
## SPARQL/Update.
## Creation of named graph from web requests is not supported.
<#serviceUpdate>
rdf:type joseki:Service ;
rdfs:label "SPARQL/Update" ;
joseki:serviceRef "update/service" ;
joseki:dataset <#sdb-part>;
joseki:processor joseki:ProcessorSPARQLUpdate .
## --------------------------------------------------------------
## Datasets
## See also SDB documentation -- http://jena.hpl.hp.com/wiki/SDB
## Special declarations to cause SDB to be used.
## Initialize SDB.
## Tell the system that sdb:DatasetStore is an implementation of ja:RDFDataset .
## Tell the system that sdb:Model is an implementation of ja:Model .
[] ja:loadClass "com.hp.hpl.jena.sdb.SDB" .
sdb:DatasetStore rdfs:subClassOf ja:RDFDataset .
sdb:Model rdfs:subClassOf ja:Model .
## DEFINE A DATASETSTORE FOR EACH NAMED GRAPH
## Trying a solution for concurrency problems (SDB "Already in transaction")
#######################################################################
<#sdb> rdf:type sdb:DatasetStore ;
joseki:poolSize 5 ;
sdb:store <#store> .
<#store> rdf:type sdb:Store ;
sdb:layout "layout2" ;
sdb:connection <#conn> ;
sdb:engine "InnoDB" ; # MySQL specific
.
<#conn> rdf:type sdb:SDBConnection ;
sdb:sdbType "MySQL" ; # Needed for JDBC URL
sdb:sdbHost "localhost" ;
sdb:sdbName "sdb" ;
sdb:sdbUser "sdb" ;
sdb:sdbPassword "password" ;
sdb:driver "com.mysql.jdbc.Driver" ;
sdb:jdbcURL
"jdbc:mysql://localhost/sdb?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true";
.
## D2R MODEL GRAPH
[] ja:imports d2rq: .
<#d2r-test-graph> a d2rq:D2RQModel;
d2rq:mappingFile <file:./TopicICTTaxonomy_mapping.n3>;
d2rq:resourceBaseURI
<http://www.imolinfo.it/ontologie/internal/v1.0/topic-ict-model#>;
.
## --------------------------------------------------------------
## Processors
## --------------------------------------------------------------
joseki:ProcessorSPARQL_FixedDS
rdfs:label "SPARQL processor for fixed datasets" ;
rdf:type joseki:Processor ;
module:implementation
[ rdf:type joseki:ServiceImpl ;
module:className <java:org.joseki.processors.SPARQL>
] ;
joseki:allowExplicitDataset "false"^^xsd:boolean ;
joseki:allowWebLoading "false"^^xsd:boolean ;
## The database is safe for MRSW (multiple-reader, single-writer).
joseki:lockingPolicy joseki:lockingPolicyMRSW ;
## joseki:lockingPolicy joseki:lockingPolicyMutex ;
## joseki:lockingPolicy joseki:lockingPolicyNone ;
.
joseki:ProcessorSPARQLUpdate
rdfs:label "SPARQL Update processor" ;
rdf:type joseki:Processor ;
module:implementation
[ rdf:type joseki:ServiceImpl ;
module:className <java:org.joseki.processors.SPARQLUpdate>
] ;
joseki:lockingPolicy joseki:lockingPolicyMRSW ;
.
# Local Variables:
# tab-width: 4
# indent-tabs-mode: nil
# End: