Re: Fuseki errors with concurrent requests

2018-08-01 Thread Mikael Pesonen



Hi,

is it somehow possible to catch this error (TransactionManager ERROR There 
are now active transactions) from Fuseki GSP so that the client could sleep 
and retry later, allowing the database to finish the transaction?
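In the meantime, a client-side workaround is to retry with backoff. A minimal sketch in Python; the `op` callable is hypothetical and stands for whatever GSP request the client makes, raising on failure:

```python
import random
import time

def with_retry(op, attempts=5, base_delay=0.1):
    """Call op(); on failure, sleep with jittered exponential backoff
    and try again, re-raising after the last attempt."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise
            # 0.1 s, 0.2 s, 0.4 s, ... with up to 2x random jitter
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Note that, as reported elsewhere in this thread, the requests can return 200 even when the error is logged, so a retry may need to key off the response content rather than the HTTP status code.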




On 19.6.2018 23:56, Andy Seaborne wrote:



On 18/06/18 11:36, Mikael Pesonen wrote:



On 18.6.2018 13:30, Andy Seaborne wrote:



On 18/06/18 10:55, Mikael Pesonen wrote:


Hi Andy,

On 18.6.2018 12:05, Andy Seaborne wrote:



>>> errors occur less than before.

which errors?


   TransactionManager ERROR There are now active transactions

This occurs when I'm inserting new data and trying to read it right 
away (I get an empty result), or deleting data and trying to read it to 
make sure it's deleted (I get data that should be deleted). There are 
no other errors in the log.


From looking at the code, I can't see how that happens - is it now 
possible to provide a reproducible script so that I can run it on my 
machine?


    Andy



--
Lingsoft - 30 years of Leading Language Management

www.lingsoft.fi

Speech Applications - Language Management - Translation - Reader's and Writer's 
Tools - Text Tools - E-books and M-books

Mikael Pesonen
System Engineer

e-mail: mikael.peso...@lingsoft.fi
Tel. +358 2 279 3300

Time zone: GMT+2

Helsinki Office
Eteläranta 10
FI-00130 Helsinki
FINLAND

Turku Office
Kauppiaskatu 5 A
FI-20100 Turku
FINLAND



Re: Fuseki errors with concurrent requests

2018-06-20 Thread Mikael Pesonen



Hi,

On 19.6.2018 23:56, Andy Seaborne wrote:

>>> errors occur less than before.

which errors?


   TransactionManager ERROR There are now active transactions

This occurs when I'm inserting new data and trying to read it right 
away (I get an empty result), or deleting data and trying to read it to 
make sure it's deleted (I get data that should be deleted). There are 
no other errors in the log.


From looking at the code, I can't see how that happens - is it now 
possible to provide a reproducible script so that I can run it on my 
machine?


Unfortunately it's the same problem as before: we are testing with our 
REST API product, which is quite a bit of code, and I don't have time 
to build a similar test case from scratch.


Br



    Andy






Re: Fuseki errors with concurrent requests

2018-06-19 Thread Andy Seaborne




On 18/06/18 11:36, Mikael Pesonen wrote:



On 18.6.2018 13:30, Andy Seaborne wrote:



On 18/06/18 10:55, Mikael Pesonen wrote:


Hi Andy,

On 18.6.2018 12:05, Andy Seaborne wrote:



>>> errors occur less than before.

which errors?


   TransactionManager ERROR There are now active transactions

This occurs when I'm inserting new data and trying to read it right away 
(I get an empty result), or deleting data and trying to read it to make sure 
it's deleted (I get data that should be deleted). There are no other 
errors in the log.


From looking at the code, I can't see how that happens - is it now 
possible to provide a reproducible script so that I can run it on my machine?


Andy



Re: Fuseki errors with concurrent requests

2018-06-18 Thread Mikael Pesonen




On 18.6.2018 13:30, Andy Seaborne wrote:



On 18/06/18 10:55, Mikael Pesonen wrote:


Hi Andy,

On 18.6.2018 12:05, Andy Seaborne wrote:



On 15/06/18 15:36, Mikael Pesonen wrote:


Hi,

Unfortunately I haven't been able to make a standalone package yet. 
However, with Jena 3.7.0 things are a bit better now - errors occur 
less often than before.


"The errors" are which exactly? the thread doesn't make it clear.

Earlier I was getting these (they were mentioned in this mail thread):

org.apache.jena.tdb.base.file.FileException: In the middle of an 
alloc-write


[2018-01-24 17:16:53] BindingTDB ERROR get1(?o)
org.apache.jena.tdb.base.file.FileException: 
ObjectFileStorage.read[nodes](491421708)[filesize=495059272][file.size()=495059272]: 
Failed to read the length : got 0 bytes
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:341) 



and these have stopped with 3.7.0?

Haven't seen them so far.


>>> errors occur less than before.

which errors?


  TransactionManager ERROR There are now active transactions

This occurs when I'm inserting new data and trying to read it right away 
(I get an empty result), or deleting data and trying to read it to make sure 
it's deleted (I get data that should be deleted). There are no other 
errors in the log.


Is this a database built with 3.7.0 (only)?
Yes, I made a dump and imported it with tdbloader2 into an empty Jena 
data folder.
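For reference, the dump-and-reload described above can be done with the standard TDB command-line tools, roughly as follows (paths are illustrative, and Fuseki must be stopped while the TDB1 files are dumped or loaded):

```shell
# Export every quad from the live location, set the old files aside,
# then bulk-load the dump into a fresh directory.
tdbdump --loc /path/to/jena_data > dump.nq
mv /path/to/jena_data /path/to/jena_data.old
tdbloader2 --loc /path/to/jena_data dump.nq
```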




    Andy





Actually now I'm not getting error codes from Fuseki (all 200), but 
errors still occur, and at that time there is this Fuseki output:


  TransactionManager ERROR There are now active transactions


The only code paths I can find that lead to this are either inside 
an exclusive lock or StoreConnection.expel, which is a somewhat 
drastic thing to do; the app needs to make sure it is quiet first.


This occurs when I'm inserting new data and trying to read it right 
away (I get an empty result), or deleting data and trying to read it to 
make sure it's deleted (I get data that should be deleted). There are 
no other errors in the log.


When an HTTP request returns, the transaction should be committed and 
visible. Calling in parallel



Remind me again - what's your setup? OS, hardware and Fuseki 
configuration.

- Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-149-generic x86_64)
- Virtual server, 8 GB memory, Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz

- Fuseki 3.7.0 is run from the command line:

  /usr/bin/java -Dlog4j.debug 
-Dlog4j.configuration=file:xxx/apache-jena-fuseki-3.7.0/log4j.properties 
-Xmx5600M -jar fuseki-server.jar --update --port 3030 
--loc=xxx/jena_data/ /ds

- No config file

- All calls are made using HTTP/GSP.



Is this because things are happening too fast, in a kind of 
artificial bombardment of the service? Would just slowing things down a 
bit solve this?


It is worth trying to see what the effect is, though it should work.

    Andy



Br


On 30.1.2018 15:00, Andy Seaborne wrote:
Could you please turn this into a complete, standalone, minimal 
example? That is, something with all the details that can be run by 
someone else, including how the server is being run, what disk 
storage you are using, and whether the database starts fresh 
or not.


Does it happen on earlier versions of Fuseki?

    Andy



On 30/01/18 09:28, Mikael Pesonen wrote:


Hi,

My test depends on a REST API we developed. So basically there are 
simultaneous calls to an Apache web server, which runs PHP that 
calls Fuseki using curl.




On 29.1.2018 16:56, ajs6f wrote:
That might be worth trying, although since TDB1 is MRSW 
(multiple reader or single writer), that queuing of updates 
should be going on on the server-side.
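To illustrate the MRSW model being described, here is a minimal sketch in Python of a lock with those semantics: any number of readers may hold it together, but a writer excludes both readers and other writers. This only illustrates the concurrency model, not TDB's actual implementation.

```python
import threading

class MRSWLock:
    """Minimal multiple-reader/single-writer lock (illustration only)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:          # readers wait only for a writer
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:  # writer waits for everyone
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```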


I haven't had time to look at this issue, and it's difficult to 
say much without a reproducible phenomenon. Do either of 
y'all have test code we can use to demonstrate this?


ajs6f

On Jan 29, 2018, at 5:43 AM, Mikael Pesonen 
 wrote:



Until there's a better solution, a quick one would be to put all 
operations through a single queue?
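A single-queue setup could look something like the following sketch in Python (the names are made up for illustration; the submitted operations would be the HTTP/GSP calls). Every operation runs on one worker thread, so no two calls ever overlap from the client side; whether that is sufficient still depends on the server's transaction visibility, which is what this thread is about.

```python
import queue
import threading

def start_serializer():
    """Return a submit() function that funnels every operation through a
    single worker thread, so no two operations ever run concurrently."""
    q = queue.Queue()

    def worker():
        while True:
            op, done, result = q.get()
            try:
                result.append(op())
            finally:
                done.set()

    threading.Thread(target=worker, daemon=True).start()

    def submit(op):
        # Block until the operation has actually run, then return its result.
        done, result = threading.Event(), []
        q.put((op, done, result))
        done.wait()
        return result[0]

    return submit
```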


Br

On 25.1.2018 4:11, Chris Tomlinson wrote:

Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl 



Chris

On Jan 24, 2018, at 17:40, Chris Tomlinson 
 wrote:


On the latest 3.7.0-Snapshot (master branch) I also saw 
repeated occurrences of this the other day while running some 
queries from the fuseki browser app and with a database load 
going on with our own app using:


DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

static void addToTransferBulk(final String graphName, final Model m) {
    if (currentDataset == null)
        currentDataset = DatasetFactory.createGeneral();
    currentDataset.addNamedModel(graphName, m);
    triplesInDataset += m.size();
    if (triplesInDataset > initialLoadBulkSize) {
        try {
            loadDatasetMutex(cur

Re: Fuseki errors with concurrent requests

2018-06-18 Thread Andy Seaborne




On 18/06/18 10:55, Mikael Pesonen wrote:


Hi Andy,

On 18.6.2018 12:05, Andy Seaborne wrote:



On 15/06/18 15:36, Mikael Pesonen wrote:


Hi,

Unfortunately I haven't been able to make a standalone package yet. 
However, with Jena 3.7.0 things are a bit better now - errors occur 
less often than before.


"The errors" are which exactly? the thread doesn't make it clear.

Earlier I was getting these (they were mentioned in this mail thread):

org.apache.jena.tdb.base.file.FileException: In the middle of an 
alloc-write


[2018-01-24 17:16:53] BindingTDB ERROR get1(?o)
org.apache.jena.tdb.base.file.FileException: 
ObjectFileStorage.read[nodes](491421708)[filesize=495059272][file.size()=495059272]: 
Failed to read the length : got 0 bytes
     at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:341) 



and these have stopped with 3.7.0?

>>> errors occur less than before.

which errors?

Is this a database built with 3.7.0 (only)?

Andy





Actually now I'm not getting error codes from Fuseki (all 200), but 
errors still occur, and at that time there is this Fuseki output:


  TransactionManager ERROR There are now active transactions


The only code paths I can find that lead to this are either inside an 
exclusive lock or StoreConnection.expel, which is a somewhat drastic 
thing to do; the app needs to make sure it is quiet first.


This occurs when I'm inserting new data and trying to read it right 
away (I get an empty result), or deleting data and trying to read it to 
make sure it's deleted (I get data that should be deleted). There are 
no other errors in the log.


When an HTTP request returns, the transaction should be committed and 
visible. Calling in parallel



Remind me again - what's your setup? OS, hardware and Fuseki 
configuration.

- Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-149-generic x86_64)
- Virtual server, 8 GB memory, Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz

- Fuseki 3.7.0 is run from the command line:

  /usr/bin/java -Dlog4j.debug 
-Dlog4j.configuration=file:xxx/apache-jena-fuseki-3.7.0/log4j.properties 
-Xmx5600M -jar fuseki-server.jar --update --port 3030 
--loc=xxx/jena_data/ /ds

- No config file

- All calls are made using HTTP/GSP.



Is this because things are happening too fast, in a kind of artificial 
bombardment of the service? Would just slowing things down a bit solve 
this?


It is worth trying to see what the effect is, though it should work.

    Andy



Br


On 30.1.2018 15:00, Andy Seaborne wrote:
Could you please turn this into a complete, standalone, minimal 
example? That is, something with all the details that can be run by 
someone else, including how the server is being run, what disk 
storage you are using, and whether the database starts fresh or 
not.


Does it happen on earlier versions of Fuseki?

    Andy



On 30/01/18 09:28, Mikael Pesonen wrote:


Hi,

My test depends on a REST API we developed. So basically there are 
simultaneous calls to an Apache web server, which runs PHP that calls 
Fuseki using curl.




On 29.1.2018 16:56, ajs6f wrote:
That might be worth trying, although since TDB1 is MRSW (multiple 
reader or single writer), that queuing of updates should be going 
on on the server-side.


I haven't had time to look at this issue, and it's difficult to 
say much without a reproducible phenomenon. Do either of y'all 
have test code we can use to demonstrate this?


ajs6f

On Jan 29, 2018, at 5:43 AM, Mikael Pesonen 
 wrote:



Until there's a better solution, a quick one would be to put all 
operations through a single queue?


Br

On 25.1.2018 4:11, Chris Tomlinson wrote:

Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl 



Chris

On Jan 24, 2018, at 17:40, Chris Tomlinson 
 wrote:


On the latest 3.7.0-Snapshot (master branch) I also saw 
repeated occurrences of this the other day while running some 
queries from the fuseki browser app and with a database load 
going on with our own app using:


DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

static void addToTransferBulk(final String graphName, final Model m) {
    if (currentDataset == null)
        currentDataset = DatasetFactory.createGeneral();
    currentDataset.addNamedModel(graphName, m);
    triplesInDataset += m.size();
    if (triplesInDataset > initialLoadBulkSize) {
        try {
            loadDatasetMutex(currentDataset);
            currentDataset = null;
            triplesInDataset = 0;
        } catch (TimeoutException e) {
            e.printStackTrace();
            return;
        }
    }
}

As I say, the exceptions appeared while I was running some 
queries from the Fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO

Re: Fuseki errors with concurrent requests

2018-06-18 Thread Mikael Pesonen



Hi Andy,

On 18.6.2018 12:05, Andy Seaborne wrote:



On 15/06/18 15:36, Mikael Pesonen wrote:


Hi,

Unfortunately I haven't been able to make a standalone package yet. 
However, with Jena 3.7.0 things are a bit better now - errors occur 
less often than before.


"The errors" are which exactly? the thread doesn't make it clear.

Earlier I was getting these (they were mentioned in this mail thread):

org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write

[2018-01-24 17:16:53] BindingTDB ERROR get1(?o)
org.apache.jena.tdb.base.file.FileException: 
ObjectFileStorage.read[nodes](491421708)[filesize=495059272][file.size()=495059272]:
 Failed to read the length : got 0 bytes
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:341)



Actually now I'm not getting error codes from Fuseki (all 200), but 
errors still occur, and at that time there is this Fuseki output:


  TransactionManager ERROR There are now active transactions


The only code paths I can find that lead to this are either inside an 
exclusive lock or StoreConnection.expel, which is a somewhat drastic 
thing to do; the app needs to make sure it is quiet first.


This occurs when I'm inserting new data and trying to read it right 
away (I get an empty result), or deleting data and trying to read it to 
make sure it's deleted (I get data that should be deleted). There are 
no other errors in the log.


When an HTTP request returns, the transaction should be committed and 
visible. Calling in parallel



Remind me again - what's your setup? OS, hardware and Fuseki 
configuration.

- Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-149-generic x86_64)
- Virtual server, 8 GB memory, Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz

- Fuseki 3.7.0 is run from the command line:

  /usr/bin/java -Dlog4j.debug 
-Dlog4j.configuration=file:xxx/apache-jena-fuseki-3.7.0/log4j.properties 
-Xmx5600M -jar fuseki-server.jar --update --port 3030 
--loc=xxx/jena_data/ /ds

- No config file

- All calls are made using HTTP/GSP.



Is this because things are happening too fast, in a kind of artificial 
bombardment of the service? Would just slowing things down a bit solve 
this?


It is worth trying to see what the effect is, though it should work.

    Andy



Br


On 30.1.2018 15:00, Andy Seaborne wrote:
Could you please turn this into a complete, standalone, minimal 
example? That is, something with all the details that can be run by 
someone else, including how the server is being run, what disk 
storage you are using, and whether the database starts fresh or 
not.


Does it happen on earlier versions of Fuseki?

    Andy



On 30/01/18 09:28, Mikael Pesonen wrote:


Hi,

My test depends on a REST API we developed. So basically there are 
simultaneous calls to an Apache web server, which runs PHP that calls 
Fuseki using curl.




On 29.1.2018 16:56, ajs6f wrote:
That might be worth trying, although since TDB1 is MRSW (multiple 
reader or single writer), that queuing of updates should be going 
on on the server-side.


I haven't had time to look at this issue, and it's difficult to 
say much without a reproducible phenomenon. Do either of y'all 
have test code we can use to demonstrate this?


ajs6f

On Jan 29, 2018, at 5:43 AM, Mikael Pesonen 
 wrote:



Until there's a better solution, a quick one would be to put all 
operations through a single queue?


Br

On 25.1.2018 4:11, Chris Tomlinson wrote:

Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl 



Chris

On Jan 24, 2018, at 17:40, Chris Tomlinson 
 wrote:


On the latest 3.7.0-Snapshot (master branch) I also saw 
repeated occurrences of this the other day while running some 
queries from the fuseki browser app and with a database load 
going on with our own app using:


DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

static void addToTransferBulk(final String graphName, final Model m) {
    if (currentDataset == null)
        currentDataset = DatasetFactory.createGeneral();
    currentDataset.addNamedModel(graphName, m);
    triplesInDataset += m.size();
    if (triplesInDataset > initialLoadBulkSize) {
        try {
            loadDatasetMutex(currentDataset);
            currentDataset = null;
            triplesInDataset = 0;
        } catch (TimeoutException e) {
            e.printStackTrace();
            return;
        }
    }
}

As I say, the exceptions appeared while I was running some 
queries from the Fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw

[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of 
an alloc-write
at 
org.apa

Re: Fuseki errors with concurrent requests

2018-06-18 Thread Andy Seaborne




On 15/06/18 15:36, Mikael Pesonen wrote:


Hi,

Unfortunately I haven't been able to make a standalone package yet. 
However, with Jena 3.7.0 things are a bit better now - errors occur 
less often than before.


"The errors" are which exactly? the thread doesn't make it clear.

Actually now I'm not getting error codes from Fuseki (all 200), but 
errors still occur, and at that time there is this Fuseki output:


  TransactionManager ERROR There are now active transactions


The only code paths I can find that lead to this are either inside an 
exclusive lock or StoreConnection.expel, which is a somewhat drastic 
thing to do; the app needs to make sure it is quiet first.


This occurs when I'm inserting new data and trying to read it right away 
(I get an empty result), or deleting data and trying to read it to make sure 
it's deleted (I get data that should be deleted). There are no other 
errors in the log.


When an HTTP request returns, the transaction should be committed and 
visible. Calling in parallel



Remind me again - what's your setup? OS, hardware and Fuseki configuration.

Is this because things are happening too fast, in a kind of artificial 
bombardment of the service? Would just slowing things down a bit solve this?


It is worth trying to see what the effect is, though it should work.

Andy



Br


On 30.1.2018 15:00, Andy Seaborne wrote:
Could you please turn this into a complete, standalone, minimal 
example? That is, something with all the details that can be run by 
someone else, including how the server is being run, what disk storage 
you are using, and whether the database starts fresh or not.


Does it happen on earlier versions of Fuseki?

    Andy



On 30/01/18 09:28, Mikael Pesonen wrote:


Hi,

My test depends on a REST API we developed. So basically there are 
simultaneous calls to an Apache web server, which runs PHP that calls 
Fuseki using curl.




On 29.1.2018 16:56, ajs6f wrote:
That might be worth trying, although since TDB1 is MRSW (multiple 
reader or single writer), that queuing of updates should be going on 
on the server-side.


I haven't had time to look at this issue, and it's difficult to say 
much without a reproducible phenomenon. Do either of y'all have 
test code we can use to demonstrate this?


ajs6f

On Jan 29, 2018, at 5:43 AM, Mikael Pesonen 
 wrote:



Until there's a better solution, a quick one would be to put all 
operations through a single queue?


Br

On 25.1.2018 4:11, Chris Tomlinson wrote:

Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl 



Chris

On Jan 24, 2018, at 17:40, Chris Tomlinson 
 wrote:


On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
occurrences of this the other day while running some queries from 
the fuseki browser app and with a database load going on with our 
own app using:


DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

static void addToTransferBulk(final String graphName, final Model m) {
    if (currentDataset == null)
        currentDataset = DatasetFactory.createGeneral();
    currentDataset.addNamedModel(graphName, m);
    triplesInDataset += m.size();
    if (triplesInDataset > initialLoadBulkSize) {
        try {
            loadDatasetMutex(currentDataset);
            currentDataset = null;
            triplesInDataset = 0;
        } catch (TimeoutException e) {
            e.printStackTrace();
            return;
        }
    }
}

As I say, the exceptions appeared while I was running some queries 
from the Fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw

[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an 
alloc-write
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311) 

at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57) 


at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50) 

at 
org.apache.jena.tdb.store.n

Re: Fuseki errors with concurrent requests

2018-06-15 Thread Mikael Pesonen



Hi,

Unfortunately I haven't been able to make a standalone package yet. 
However, with Jena 3.7.0 things are a bit better now - errors occur 
less often than before.


Actually now I'm not getting error codes from Fuseki (all 200), but 
errors still occur, and at that time there is this Fuseki output:


 TransactionManager ERROR There are now active transactions

This occurs when I'm inserting new data and trying to read it right away 
(I get an empty result), or deleting data and trying to read it to make sure 
it's deleted (I get data that should be deleted). There are no other 
errors in the log.


Is this because things are happening too fast, in a kind of artificial 
bombardment of the service? Would just slowing things down a bit solve this?


Br


On 30.1.2018 15:00, Andy Seaborne wrote:
Could you please turn this into a complete, standalone, minimal 
example? That is, something with all the details that can be run by 
someone else, including how the server is being run, what disk storage 
you are using, and whether the database starts fresh or not.


Does it happen on earlier versions of Fuseki?

    Andy



On 30/01/18 09:28, Mikael Pesonen wrote:


Hi,

My test depends on a REST API we developed. So basically there are 
simultaneous calls to an Apache web server, which runs PHP that calls 
Fuseki using curl.




On 29.1.2018 16:56, ajs6f wrote:
That might be worth trying, although since TDB1 is MRSW (multiple 
reader or single writer), that queuing of updates should be going on 
on the server-side.


I haven't had time to look at this issue, and it's difficult to say 
much without a reproducible phenomenon. Do either of y'all have 
test code we can use to demonstrate this?


ajs6f

On Jan 29, 2018, at 5:43 AM, Mikael Pesonen 
 wrote:



Until there's a better solution, a quick one would be to put all 
operations through a single queue?


Br

On 25.1.2018 4:11, Chris Tomlinson wrote:

Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl 



Chris

On Jan 24, 2018, at 17:40, Chris Tomlinson 
 wrote:


On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
occurrences of this the other day while running some queries from 
the fuseki browser app and with a database load going on with our 
own app using:


DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

static void addToTransferBulk(final String graphName, final Model m) {
    if (currentDataset == null)
        currentDataset = DatasetFactory.createGeneral();
    currentDataset.addNamedModel(graphName, m);
    triplesInDataset += m.size();
    if (triplesInDataset > initialLoadBulkSize) {
        try {
            loadDatasetMutex(currentDataset);
            currentDataset = null;
            triplesInDataset = 0;
        } catch (TimeoutException e) {
            e.printStackTrace();
            return;
        }
    }
}

As I say, the exceptions appeared while I was running some queries 
from the Fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw

[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an 
alloc-write
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311) 

at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57) 


at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67) 

at 
org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121) 


at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
at 
org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76) 

at 
org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.ja

Re: Fuseki errors with concurrent requests

2018-03-07 Thread Andy Seaborne



On 07/03/18 12:44, Mikael Pesonen wrote:


So can we back up the quads, empty the database, and insert the quads 
back, and then have a non-corrupted database?


It can't be guaranteed.

If the quad dump is valid RDF, the new database will not be corrupted.

But a dump of a broken database may be incomplete in some way.  It is 
unusual but possible.


Andy




On 5.3.2018 16:41, ajs6f wrote:
To my knowledge (Andy of course is the TDB expert) you can't really 
rebuild a TDB instance from a corrupted TDB instance. You should start 
with a known-good backup or original RDF files.


ajs6f

On Mar 5, 2018, at 9:32 AM, Mikael Pesonen 
 wrote:



Still having these issues on all of our installations.

I'm going to rule out a corrupted database on our oldest server. What 
would be the preferred way to rebuild the data?


Data folder:

  5226102784 Mar  5 12:48 GOSP.dat
   260046848 Mar  5 12:48 GOSP.idn
  5377097728 Mar  5 12:48 GPOS.dat
   268435456 Mar  5 12:48 GPOS.idn
  5486149632 Mar  5 12:48 GSPO.dat
   285212672 Mar  5 12:48 GSPO.idn
   0 Mar  5 12:48 journal.jrnl
   545259520 Mar  5 12:38 node2id.dat
   150994944 Feb 20 16:32 node2id.idn
   497658012 Mar  5 12:38 nodes.dat
   1 Nov 14 15:27 none.opt
    33554432 Jan 24 17:06 OSP.dat
  4848615424 Mar  5 12:48 OSPG.dat
   293601280 Mar  1 12:46 OSPG.idn
 8388608 Jan 24 16:59 OSP.idn
    25165824 Jan 24 17:06 POS.dat
  4966055936 Mar  5 12:48 POSG.dat
   276824064 Mar  5 12:38 POSG.idn
 8388608 Jan 24 16:55 POS.idn
 8388608 Jan 31 12:06 prefix2id.dat
 8388608 Mar 15  2016 prefix2id.idn
    6771 Jan 31 12:06 prefixes.dat
    25165824 Jan 31 12:06 prefixIdx.dat
 8388608 Jan  8 13:19 prefixIdx.idn
    33554432 Jan 24 17:06 SPO.dat
  5075107840 Mar  5 12:48 SPOG.dat
   369098752 Mar  5 12:48 SPOG.idn
 8388608 Jan 24 17:04 SPO.idn
    4069 Nov  7 16:38 _stats.opt
   4 Feb  6 12:01 tdb.lock

On 30.1.2018 15:04, Andy Seaborne wrote:

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get 
some level of isolation.


What's the Fuseki config in this case?

 Andy

On 24/01/18 23:40, Chris Tomlinson wrote:
On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
occurrences of this the other day while running some queries from 
the fuseki browser app and with a database load going on with our 
own app using:


DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

  DatasetAccessor putModel(graphName, m);

and for following models:

static void addToTransferBulk(final String graphName, final Model m) {
    if (currentDataset == null)
        currentDataset = DatasetFactory.createGeneral();
    currentDataset.addNamedModel(graphName, m);
    triplesInDataset += m.size();
    if (triplesInDataset > initialLoadBulkSize) {
        try {
            loadDatasetMutex(currentDataset);
            currentDataset = null;
            triplesInDataset = 0;
        } catch (TimeoutException e) {
            e.printStackTrace();
            return;
        }
    }
}

As I say, the exceptions appeared while I was running some queries 
from the Fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
 at org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
 at org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
 at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
 at org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
 at org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
 at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
 at org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
 at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
 at org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
 at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
 at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)

Re: Fuseki errors with concurrent requests

2018-03-07 Thread Mikael Pesonen


So if we back up the quads, empty the database, and insert the quads back, 
will we then have a non-corrupted database?



On 5.3.2018 16:41, ajs6f wrote:

To my knowledge (Andy of course is the TDB expert) you can't really rebuild a 
TDB instance from a corrupted TDB instance. You should start with a known-good 
backup or original RDF files.

ajs6f


On Mar 5, 2018, at 9:32 AM, Mikael Pesonen  wrote:


Still having these issues on all of our installations.

I'm going to rule out corrupted database on our oldest server. What would be 
preferred way to rebuild data?

Data folder:

  5226102784 Mar  5 12:48 GOSP.dat
   260046848 Mar  5 12:48 GOSP.idn
  5377097728 Mar  5 12:48 GPOS.dat
   268435456 Mar  5 12:48 GPOS.idn
  5486149632 Mar  5 12:48 GSPO.dat
   285212672 Mar  5 12:48 GSPO.idn
           0 Mar  5 12:48 journal.jrnl
   545259520 Mar  5 12:38 node2id.dat
   150994944 Feb 20 16:32 node2id.idn
   497658012 Mar  5 12:38 nodes.dat
           1 Nov 14 15:27 none.opt
    33554432 Jan 24 17:06 OSP.dat
  4848615424 Mar  5 12:48 OSPG.dat
   293601280 Mar  1 12:46 OSPG.idn
     8388608 Jan 24 16:59 OSP.idn
    25165824 Jan 24 17:06 POS.dat
  4966055936 Mar  5 12:48 POSG.dat
   276824064 Mar  5 12:38 POSG.idn
     8388608 Jan 24 16:55 POS.idn
     8388608 Jan 31 12:06 prefix2id.dat
     8388608 Mar 15  2016 prefix2id.idn
        6771 Jan 31 12:06 prefixes.dat
    25165824 Jan 31 12:06 prefixIdx.dat
     8388608 Jan  8 13:19 prefixIdx.idn
    33554432 Jan 24 17:06 SPO.dat
  5075107840 Mar  5 12:48 SPOG.dat
   369098752 Mar  5 12:48 SPOG.idn
     8388608 Jan 24 17:04 SPO.idn
        4069 Nov  7 16:38 _stats.opt
           4 Feb  6 12:01 tdb.lock

On 30.1.2018 15:04, Andy Seaborne wrote:

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get some level 
isolation.

What's the Fuseki config in this case?

 Andy

On 24/01/18 23:40, Chris Tomlinson wrote:

On the latest 3.7.0-Snapshot (master branch) I also saw repeated occurrences of 
this the other day while running some queries from the fuseki browser app and 
with a database load going on with our own app using:

  DatasetAccessorFactory.createHTTP(baseUrl+"/data”);


with for the first model to transfer:

  DatasetAccessor putModel(graphName, m);

and for following models:

  static void addToTransferBulk(final String graphName, final Model m) {
  if (currentDataset == null)
  currentDataset = DatasetFactory.createGeneral();
  currentDataset.addNamedModel(graphName, m);
  triplesInDataset += m.size();
  if (triplesInDataset > initialLoadBulkSize) {
  try {
  loadDatasetMutex(currentDataset);
  currentDataset = null;
  triplesInDataset = 0;
  } catch (TimeoutException e) {
  e.printStackTrace();
  return;
  }
  }
  }

as I say the exceptions appeared while I was running some queries from from the 
fuseki browser app:


[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
 at org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
 at org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
 at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
 at org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
 at org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
 at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
 at org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
 at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
 at org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
 at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
 at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
 at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
 at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
 at org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58)
 at org.apache.jena.sparql.expr.ExprFunct

Re: Fuseki errors with concurrent requests

2018-03-06 Thread Mikael Pesonen


Thanks for the tip. Our test contains a lot of logic (checking search 
results, comparing data, etc.), so we decided to build it with PHP.


But JMeter gave me an idea: is it possible to record all calls to Jena 
(with content and headers) so that the test can be rerun without our 
environment?
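[Editor's note: one lightweight way to do that is to log every HTTP request (method, URL, headers, body) as one JSON line per call, then replay the file against any server later. A minimal sketch; the log format and function names here are made up, not an existing tool:]

```python
import json

def record_request(log_path, method, url, headers, body):
    """Append one HTTP request to a JSON-lines log for later replay."""
    entry = {"method": method, "url": url, "headers": headers, "body": body}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def load_requests(log_path):
    """Read the recorded requests back, in original order."""
    with open(log_path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

Replaying is then a loop over load_requests() issuing each entry with any HTTP client, or converting the log into a JMeter or curl script.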



On 6.3.2018 12:32, Martynas Jusevičius wrote:

Maybe you can make a reproducible using JMeter or such.

On Tue, Mar 6, 2018 at 11:24 AM, Mikael Pesonen 
wrote:


Yes, clean install of Ubuntu, Jena etc.




On 5.3.2018 17:40, Andy Seaborne wrote:



On 05/03/18 15:04, Mikael Pesonen wrote:


We are using GSP and our test script is doing ~20 json-ld inserts and
sparql updates in a row ASAP, and we are running 10 test scripts
concurrently. This test is failing now.


Starting with an empty database?



On 5.3.2018 16:51, ajs6f wrote:


"fairly high load and concurrent usage"

This is not a very precise or reproducible measure.

Many sites use Jena in production at all kinds of scales for all kinds
of dimensions, including HA setups. If you can explain more about your
specific situation, you will get more useful advice.

ajs6f

On Mar 5, 2018, at 9:45 AM, Mikael Pesonen 

wrote:


To be clear: can Jena be recommended for production database in our
customer cases for fairly high load and concurrent usage? Or is it mainly
for scientific purposes?

Br

On 5.3.2018 16:41, ajs6f wrote:


To my knowledge (Andy of course is the TDB expert) you can't really
rebuild a TDB instance from a corrupted TDB instance. You should start with
a known-good backup or original RDF files.

ajs6f

On Mar 5, 2018, at 9:32 AM, Mikael Pesonen <

mikael.peso...@lingsoft.fi> wrote:


Still having these issues on all of our installations.

I'm going to rule out corrupted database on our oldest server. What
would be preferred way to rebuild data?

Data folder:

   5226102784 Mar  5 12:48 GOSP.dat
260046848 Mar  5 12:48 GOSP.idn
   5377097728 Mar  5 12:48 GPOS.dat
268435456 Mar  5 12:48 GPOS.idn
   5486149632 Mar  5 12:48 GSPO.dat
285212672 Mar  5 12:48 GSPO.idn
0 Mar  5 12:48 journal.jrnl
545259520 Mar  5 12:38 node2id.dat
150994944 Feb 20 16:32 node2id.idn
497658012 Mar  5 12:38 nodes.dat
1 Nov 14 15:27 none.opt
 33554432 Jan 24 17:06 OSP.dat
   4848615424 Mar  5 12:48 OSPG.dat
293601280 Mar  1 12:46 OSPG.idn
  8388608 Jan 24 16:59 OSP.idn
 25165824 Jan 24 17:06 POS.dat
   4966055936 Mar  5 12:48 POSG.dat
276824064 Mar  5 12:38 POSG.idn
  8388608 Jan 24 16:55 POS.idn
  8388608 Jan 31 12:06 prefix2id.dat
  8388608 Mar 15  2016 prefix2id.idn
 6771 Jan 31 12:06 prefixes.dat
 25165824 Jan 31 12:06 prefixIdx.dat
  8388608 Jan  8 13:19 prefixIdx.idn
 33554432 Jan 24 17:06 SPO.dat
   5075107840 Mar  5 12:48 SPOG.dat
369098752 Mar  5 12:48 SPOG.idn
  8388608 Jan 24 17:04 SPO.idn
 4069 Nov  7 16:38 _stats.opt
4 Feb  6 12:01 tdb.lock

On 30.1.2018 15:04, Andy Seaborne wrote:


These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get
some level isolation.

What's the Fuseki config in this case?

  Andy

On 24/01/18 23:40, Chris Tomlinson wrote:


On the latest 3.7.0-Snapshot (master branch) I also saw repeated
occurrences of this the other day while running some queries from the
fuseki browser app and with a database load going on with our own app using:

DatasetAccessorFactory.createHTTP(baseUrl+"/data”);


with for the first model to transfer:

   DatasetAccessor putModel(graphName, m);

and for following models:

   static void addToTransferBulk(final String graphName, final
Model m) {
   if (currentDataset == null)
   currentDataset = DatasetFactory.createGeneral();
   currentDataset.addNamedModel(graphName, m);
   triplesInDataset += m.size();
   if (triplesInDataset > initialLoadBulkSize) {
   try {
   loadDatasetMutex(currentDataset);
   currentDataset = null;
   triplesInDataset = 0;
   } catch (TimeoutException e) {
   e.printStackTrace();
   return;
   }
   }
   }

as I say the exceptions appeared while I was running some queries
from from the fuseki browser app:

[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)

[2018-01-22 16:25:03] Fuseki INFO  [477] POST
http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an
alloc-write
  at org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
  at org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
  at org.apache.jena.tdb.lib.NodeLib.fetch

Re: Fuseki errors with concurrent requests

2018-03-06 Thread Martynas Jusevičius
Maybe you can make a reproducible using JMeter or such.

On Tue, Mar 6, 2018 at 11:24 AM, Mikael Pesonen 
wrote:

>
> Yes, clean install of Ubuntu, Jena etc.
>
>
>
>
> On 5.3.2018 17:40, Andy Seaborne wrote:
>
>>
>>
>> On 05/03/18 15:04, Mikael Pesonen wrote:
>>
>>>
>>> We are using GSP and our test script is doing ~20 json-ld inserts and
>>> sparql updates in a row ASAP, and we are running 10 test scripts
>>> concurrently. This test is failing now.
>>>
>>
>> Starting with an empty database?
>>
>>
>>>
>>> On 5.3.2018 16:51, ajs6f wrote:
>>>
 "fairly high load and concurrent usage"

 This is not a very precise or reproducible measure.

 Many sites use Jena in production at all kinds of scales for all kinds
 of dimensions, including HA setups. If you can explain more about your
 specific situation, you will get more useful advice.

 ajs6f

 On Mar 5, 2018, at 9:45 AM, Mikael Pesonen 
> wrote:
>
>
> To be clear: can Jena be recommended for production database in our
> customer cases for fairly high load and concurrent usage? Or is it mainly
> for scientific purposes?
>
> Br
>
> On 5.3.2018 16:41, ajs6f wrote:
>
>> To my knowledge (Andy of course is the TDB expert) you can't really
>> rebuild a TDB instance from a corrupted TDB instance. You should start 
>> with
>> a known-good backup or original RDF files.
>>
>> ajs6f
>>
>> On Mar 5, 2018, at 9:32 AM, Mikael Pesonen <
>>> mikael.peso...@lingsoft.fi> wrote:
>>>
>>>
>>> Still having these issues on all of our installations.
>>>
>>> I'm going to rule out corrupted database on our oldest server. What
>>> would be preferred way to rebuild data?
>>>
>>> Data folder:
>>>
>>>   5226102784 Mar  5 12:48 GOSP.dat
>>>260046848 Mar  5 12:48 GOSP.idn
>>>   5377097728 Mar  5 12:48 GPOS.dat
>>>268435456 Mar  5 12:48 GPOS.idn
>>>   5486149632 Mar  5 12:48 GSPO.dat
>>>285212672 Mar  5 12:48 GSPO.idn
>>>0 Mar  5 12:48 journal.jrnl
>>>545259520 Mar  5 12:38 node2id.dat
>>>150994944 Feb 20 16:32 node2id.idn
>>>497658012 Mar  5 12:38 nodes.dat
>>>1 Nov 14 15:27 none.opt
>>> 33554432 Jan 24 17:06 OSP.dat
>>>   4848615424 Mar  5 12:48 OSPG.dat
>>>293601280 Mar  1 12:46 OSPG.idn
>>>  8388608 Jan 24 16:59 OSP.idn
>>> 25165824 Jan 24 17:06 POS.dat
>>>   4966055936 Mar  5 12:48 POSG.dat
>>>276824064 Mar  5 12:38 POSG.idn
>>>  8388608 Jan 24 16:55 POS.idn
>>>  8388608 Jan 31 12:06 prefix2id.dat
>>>  8388608 Mar 15  2016 prefix2id.idn
>>> 6771 Jan 31 12:06 prefixes.dat
>>> 25165824 Jan 31 12:06 prefixIdx.dat
>>>  8388608 Jan  8 13:19 prefixIdx.idn
>>> 33554432 Jan 24 17:06 SPO.dat
>>>   5075107840 Mar  5 12:48 SPOG.dat
>>>369098752 Mar  5 12:48 SPOG.idn
>>>  8388608 Jan 24 17:04 SPO.idn
>>> 4069 Nov  7 16:38 _stats.opt
>>>4 Feb  6 12:01 tdb.lock
>>>
>>> On 30.1.2018 15:04, Andy Seaborne wrote:
>>>
 These seem to be different errors.

 "In the middle of an alloc-write" is possibly a concurrency issue.
 "Failed to read" is possibly a previous corrupted database

 This is a text dataset? That should be using an MRSW lock to get
 some level isolation.

 What's the Fuseki config in this case?

  Andy

 On 24/01/18 23:40, Chris Tomlinson wrote:

> On the latest 3.7.0-Snapshot (master branch) I also saw repeated
> occurrences of this the other day while running some queries from the
> fuseki browser app and with a database load going on with our own app 
> using:
>
> DatasetAccessorFactory.createHTTP(baseUrl+"/data”);
>
>
> with for the first model to transfer:
>
>   DatasetAccessor putModel(graphName, m);
>
> and for following models:
>
>   static void addToTransferBulk(final String graphName, final
> Model m) {
>   if (currentDataset == null)
>   currentDataset = DatasetFactory.createGeneral();
>   currentDataset.addNamedModel(graphName, m);
>   triplesInDataset += m.size();
>   if (triplesInDataset > initialLoadBulkSize) {
>   try {
>   loadDatasetMutex(currentDataset);
>   currentDataset = null;
>   triplesInDataset = 0;
>   } catch (TimeoutException e) {
>   e.printStackTrace();
>   return;
>   }
>   }
>   }
>
> as I say the exc

Re: Fuseki errors with concurrent requests

2018-03-06 Thread Mikael Pesonen


Yes, clean install of Ubuntu, Jena etc.



On 5.3.2018 17:40, Andy Seaborne wrote:



On 05/03/18 15:04, Mikael Pesonen wrote:


We are using GSP and our test script is doing ~20 json-ld inserts and 
sparql updates in a row ASAP, and we are running 10 test scripts 
concurrently. This test is failing now.


Starting with an empty database?




On 5.3.2018 16:51, ajs6f wrote:

"fairly high load and concurrent usage"

This is not a very precise or reproducible measure.

Many sites use Jena in production at all kinds of scales for all 
kinds of dimensions, including HA setups. If you can explain more 
about your specific situation, you will get more useful advice.


ajs6f

On Mar 5, 2018, at 9:45 AM, Mikael Pesonen 
 wrote:



To be clear: can Jena be recommended for production database in our 
customer cases for fairly high load and concurrent usage? Or is it 
mainly for scientific purposes?


Br

On 5.3.2018 16:41, ajs6f wrote:
To my knowledge (Andy of course is the TDB expert) you can't 
really rebuild a TDB instance from a corrupted TDB instance. You 
should start with a known-good backup or original RDF files.


ajs6f

On Mar 5, 2018, at 9:32 AM, Mikael Pesonen 
 wrote:



Still having these issues on all of our installations.

I'm going to rule out corrupted database on our oldest server. 
What would be preferred way to rebuild data?


Data folder:

  5226102784 Mar  5 12:48 GOSP.dat
   260046848 Mar  5 12:48 GOSP.idn
  5377097728 Mar  5 12:48 GPOS.dat
   268435456 Mar  5 12:48 GPOS.idn
  5486149632 Mar  5 12:48 GSPO.dat
   285212672 Mar  5 12:48 GSPO.idn
   0 Mar  5 12:48 journal.jrnl
   545259520 Mar  5 12:38 node2id.dat
   150994944 Feb 20 16:32 node2id.idn
   497658012 Mar  5 12:38 nodes.dat
   1 Nov 14 15:27 none.opt
    33554432 Jan 24 17:06 OSP.dat
  4848615424 Mar  5 12:48 OSPG.dat
   293601280 Mar  1 12:46 OSPG.idn
 8388608 Jan 24 16:59 OSP.idn
    25165824 Jan 24 17:06 POS.dat
  4966055936 Mar  5 12:48 POSG.dat
   276824064 Mar  5 12:38 POSG.idn
 8388608 Jan 24 16:55 POS.idn
 8388608 Jan 31 12:06 prefix2id.dat
 8388608 Mar 15  2016 prefix2id.idn
    6771 Jan 31 12:06 prefixes.dat
    25165824 Jan 31 12:06 prefixIdx.dat
 8388608 Jan  8 13:19 prefixIdx.idn
    33554432 Jan 24 17:06 SPO.dat
  5075107840 Mar  5 12:48 SPOG.dat
   369098752 Mar  5 12:48 SPOG.idn
 8388608 Jan 24 17:04 SPO.idn
    4069 Nov  7 16:38 _stats.opt
   4 Feb  6 12:01 tdb.lock

On 30.1.2018 15:04, Andy Seaborne wrote:

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get 
some level isolation.


What's the Fuseki config in this case?

 Andy

On 24/01/18 23:40, Chris Tomlinson wrote:
On the latest 3.7.0-Snapshot (master branch) I also saw 
repeated occurrences of this the other day while running some 
queries from the fuseki browser app and with a database load 
going on with our own app using:


DatasetAccessorFactory.createHTTP(baseUrl+"/data”);


with for the first model to transfer:

  DatasetAccessor putModel(graphName, m);

and for following models:

  static void addToTransferBulk(final String graphName, 
final Model m) {

  if (currentDataset == null)
  currentDataset = DatasetFactory.createGeneral();
  currentDataset.addNamedModel(graphName, m);
  triplesInDataset += m.size();
  if (triplesInDataset > initialLoadBulkSize) {
  try {
  loadDatasetMutex(currentDataset);
  currentDataset = null;
  triplesInDataset = 0;
  } catch (TimeoutException e) {
  e.printStackTrace();
  return;
  }
  }
  }

as I say the exceptions appeared while I was running some 
queries from from the fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
 at org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
 at org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
 at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
 at org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
 at org.apache.jena.tdb.store.nodetable.NodeTableCache

Re: Fuseki errors with concurrent requests

2018-03-05 Thread Andy Seaborne



On 05/03/18 15:04, Mikael Pesonen wrote:


We are using GSP and our test script is doing ~20 json-ld inserts and 
sparql updates in a row ASAP, and we are running 10 test scripts 
concurrently. This test is failing now.


Starting with an empty database?




On 5.3.2018 16:51, ajs6f wrote:

"fairly high load and concurrent usage"

This is not a very precise or reproducible measure.

Many sites use Jena in production at all kinds of scales for all kinds 
of dimensions, including HA setups. If you can explain more about your 
specific situation, you will get more useful advice.


ajs6f

On Mar 5, 2018, at 9:45 AM, Mikael Pesonen 
 wrote:



To be clear: can Jena be recommended for production database in our 
customer cases for fairly high load and concurrent usage? Or is it 
mainly for scientific purposes?


Br

On 5.3.2018 16:41, ajs6f wrote:
To my knowledge (Andy of course is the TDB expert) you can't really 
rebuild a TDB instance from a corrupted TDB instance. You should 
start with a known-good backup or original RDF files.


ajs6f

On Mar 5, 2018, at 9:32 AM, Mikael Pesonen 
 wrote:



Still having these issues on all of our installations.

I'm going to rule out corrupted database on our oldest server. What 
would be preferred way to rebuild data?


Data folder:

  5226102784 Mar  5 12:48 GOSP.dat
   260046848 Mar  5 12:48 GOSP.idn
  5377097728 Mar  5 12:48 GPOS.dat
   268435456 Mar  5 12:48 GPOS.idn
  5486149632 Mar  5 12:48 GSPO.dat
   285212672 Mar  5 12:48 GSPO.idn
   0 Mar  5 12:48 journal.jrnl
   545259520 Mar  5 12:38 node2id.dat
   150994944 Feb 20 16:32 node2id.idn
   497658012 Mar  5 12:38 nodes.dat
   1 Nov 14 15:27 none.opt
    33554432 Jan 24 17:06 OSP.dat
  4848615424 Mar  5 12:48 OSPG.dat
   293601280 Mar  1 12:46 OSPG.idn
 8388608 Jan 24 16:59 OSP.idn
    25165824 Jan 24 17:06 POS.dat
  4966055936 Mar  5 12:48 POSG.dat
   276824064 Mar  5 12:38 POSG.idn
 8388608 Jan 24 16:55 POS.idn
 8388608 Jan 31 12:06 prefix2id.dat
 8388608 Mar 15  2016 prefix2id.idn
    6771 Jan 31 12:06 prefixes.dat
    25165824 Jan 31 12:06 prefixIdx.dat
 8388608 Jan  8 13:19 prefixIdx.idn
    33554432 Jan 24 17:06 SPO.dat
  5075107840 Mar  5 12:48 SPOG.dat
   369098752 Mar  5 12:48 SPOG.idn
 8388608 Jan 24 17:04 SPO.idn
    4069 Nov  7 16:38 _stats.opt
   4 Feb  6 12:01 tdb.lock

On 30.1.2018 15:04, Andy Seaborne wrote:

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get 
some level isolation.


What's the Fuseki config in this case?

 Andy

On 24/01/18 23:40, Chris Tomlinson wrote:
On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
occurrences of this the other day while running some queries from 
the fuseki browser app and with a database load going on with our 
own app using:


  DatasetAccessorFactory.createHTTP(baseUrl+"/data”);


with for the first model to transfer:

  DatasetAccessor putModel(graphName, m);

and for following models:

  static void addToTransferBulk(final String graphName, final 
Model m) {

  if (currentDataset == null)
  currentDataset = DatasetFactory.createGeneral();
  currentDataset.addNamedModel(graphName, m);
  triplesInDataset += m.size();
  if (triplesInDataset > initialLoadBulkSize) {
  try {
  loadDatasetMutex(currentDataset);
  currentDataset = null;
  triplesInDataset = 0;
  } catch (TimeoutException e) {
  e.printStackTrace();
  return;
  }
  }
  }

as I say the exceptions appeared while I was running some queries 
from from the fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
 at org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
 at org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
 at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
 at org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
 at org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
 at org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
 at org.apache.jena.tdb.store.

Re: Fuseki errors with concurrent requests

2018-03-05 Thread ajs6f
From 
https://jena.apache.org/documentation/fuseki2/fuseki-server-protocol.html#backup
 :

> Backups are written to the server local directory 'backups' as 
> gzip-compressed N-Quads files.

You can load them just like any other NQuads files.

ajs6f
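[Editor's note: since the backup is plain gzip-compressed N-Quads, it can be sanity-checked before restoring, for example by counting the statements it contains. A minimal sketch (a naive line count, not a real N-Quads parser):]

```python
import gzip

def count_statements(path):
    """Count non-blank, non-comment lines in a gzipped N-Quads dump:
    a quick check that a backup is readable and non-empty."""
    n = 0
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                n += 1
    return n
```

Restoring is then loading the file into a fresh location, e.g. with tdbloader; if your version does not read .gz input directly, gunzip first.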

> On Mar 5, 2018, at 9:54 AM, Mikael Pesonen  wrote:
> 
> 
> I understood that content has to be rebuilt somehow, just copying files from 
> backup is not enough? And how do we know how old backup is needed, e.g. when 
> the corruption did happen?
> 
> 
> On 5.3.2018 16:50, ajs6f wrote:
>> I don't understand-- if you have known-good TDB backups available, why would 
>> you not start with them?
>> 
>> Dumping RDF files on the side is not a bad idea either, but TDB backups 
>> (such as are produced by the software itself) should be fine for most 
>> disaster-recovery purposes.
>> 
>> ajs6f
>> 
>>> On Mar 5, 2018, at 9:48 AM, Mikael Pesonen  
>>> wrote:
>>> 
>>> 
>>> Fortunately this is our development site. But backups are from TDB files, 
>>> we don't have a plan for dumping RDF exports as a separate backup. So I guess 
>>> we should add that as the next step?
>>> 
>>> On 5.3.2018 16:45, ajs6f wrote:
 Not a problem at all. Many sites use TDB as their main store. Just like 
 _any_ database, proper operation includes regular and frequent backups and 
 a plan for rebuilding the store independently of any instance.
 
 You _do_ have backups, right?
 
 ajs6f
 
> On Mar 5, 2018, at 9:43 AM, Mikael Pesonen  
> wrote:
> 
> 
> Hi,
> 
> we are using Jena TDB as THE database for document metadata. Data is fed 
> through our custom REST API. Is this something that would not be 
> recommended?
> 
> Br,
> 
> 
> On 5.3.2018 16:41, ajs6f wrote:
>> To my knowledge (Andy of course is the TDB expert) you can't really 
>> rebuild a TDB instance from a corrupted TDB instance. You should start 
>> with a known-good backup or original RDF files.
>> 
>> ajs6f
>> 
>>> On Mar 5, 2018, at 9:32 AM, Mikael Pesonen  
>>> wrote:
>>> 
>>> 
>>> Still having these issues on all of our installations.
>>> 
>>> I'm going to rule out corrupted database on our oldest server. What 
>>> would be preferred way to rebuild data?
>>> 
>>> Data folder:
>>> 
>>>  5226102784 Mar  5 12:48 GOSP.dat
>>>   260046848 Mar  5 12:48 GOSP.idn
>>>  5377097728 Mar  5 12:48 GPOS.dat
>>>   268435456 Mar  5 12:48 GPOS.idn
>>>  5486149632 Mar  5 12:48 GSPO.dat
>>>   285212672 Mar  5 12:48 GSPO.idn
>>>   0 Mar  5 12:48 journal.jrnl
>>>   545259520 Mar  5 12:38 node2id.dat
>>>   150994944 Feb 20 16:32 node2id.idn
>>>   497658012 Mar  5 12:38 nodes.dat
>>>   1 Nov 14 15:27 none.opt
>>>33554432 Jan 24 17:06 OSP.dat
>>>  4848615424 Mar  5 12:48 OSPG.dat
>>>   293601280 Mar  1 12:46 OSPG.idn
>>> 8388608 Jan 24 16:59 OSP.idn
>>>25165824 Jan 24 17:06 POS.dat
>>>  4966055936 Mar  5 12:48 POSG.dat
>>>   276824064 Mar  5 12:38 POSG.idn
>>> 8388608 Jan 24 16:55 POS.idn
>>> 8388608 Jan 31 12:06 prefix2id.dat
>>> 8388608 Mar 15  2016 prefix2id.idn
>>>6771 Jan 31 12:06 prefixes.dat
>>>25165824 Jan 31 12:06 prefixIdx.dat
>>> 8388608 Jan  8 13:19 prefixIdx.idn
>>>33554432 Jan 24 17:06 SPO.dat
>>>  5075107840 Mar  5 12:48 SPOG.dat
>>>   369098752 Mar  5 12:48 SPOG.idn
>>> 8388608 Jan 24 17:04 SPO.idn
>>>4069 Nov  7 16:38 _stats.opt
>>>   4 Feb  6 12:01 tdb.lock
>>> 
>>> On 30.1.2018 15:04, Andy Seaborne wrote:
 These seem to be different errors.
 
 "In the middle of an alloc-write" is possibly a concurrency issue.
 "Failed to read" is possibly a previous corrupted database
 
 This is a text dataset? That should be using an MRSW lock to get some 
 level isolation.
 
 What's the Fuseki config in this case?
 
 Andy
 
 On 24/01/18 23:40, Chris Tomlinson wrote:
> On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
> occurrences of this the other day while running some queries from the 
> fuseki browser app and with a database load going on with our own app 
> using:
> 
>  DatasetAccessorFactory.createHTTP(baseUrl+"/data”);
> 
> 
> with for the first model to transfer:
> 
>  DatasetAccessor putModel(graphName, m);
> 
> and for following models:
> 
>  static void addToTransferBulk(final String graphName, final 
> Model m) {
>  if (currentDataset == null)
>  currentDataset = DatasetFactory.createGeneral();
>  currentDataset.addNamedModel(graphName, m);
>  triplesInDataset += 

Re: Fuseki errors with concurrent requests

2018-03-05 Thread Mikael Pesonen


We are using GSP, and our test script does ~20 JSON-LD inserts and 
SPARQL updates in a row as fast as possible; we run 10 such test scripts 
concurrently. This test is now failing.
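[Editor's note: on the retry question raised at the top of this thread, when a request fails under concurrent load the client can sleep and retry with exponential backoff. A hedged sketch; send is whatever zero-argument callable performs one GSP or SPARQL request, and nothing here is Fuseki-specific:]

```python
import time

def with_retry(send, attempts=5, base_delay=0.2):
    """Call send(); on failure, sleep with exponential backoff and retry.
    Re-raises the last error once all attempts are exhausted."""
    for i in range(attempts):
        try:
            return send()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

In the test scripts above, each insert/update call would be wrapped as with_retry(lambda: do_insert(...)).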



On 5.3.2018 16:51, ajs6f wrote:

"fairly high load and concurrent usage"

This is not a very precise or reproducible measure.

Many sites use Jena in production at all kinds of scales for all kinds of 
dimensions, including HA setups. If you can explain more about your specific 
situation, you will get more useful advice.

ajs6f


On Mar 5, 2018, at 9:45 AM, Mikael Pesonen  wrote:


To be clear: can Jena be recommended as a production database for our customer 
cases with fairly high load and concurrent usage? Or is it mainly for scientific 
purposes?

Br

On 5.3.2018 16:41, ajs6f wrote:

To my knowledge (Andy of course is the TDB expert) you can't really rebuild a 
TDB instance from a corrupted TDB instance. You should start with a known-good 
backup or original RDF files.

ajs6f


On Mar 5, 2018, at 9:32 AM, Mikael Pesonen  wrote:


Still having these issues on all of our installations.

I'm going to rule out corrupted database on our oldest server. What would be 
preferred way to rebuild data?

Data folder:

  5226102784 Mar  5 12:48 GOSP.dat
   260046848 Mar  5 12:48 GOSP.idn
  5377097728 Mar  5 12:48 GPOS.dat
   268435456 Mar  5 12:48 GPOS.idn
  5486149632 Mar  5 12:48 GSPO.dat
   285212672 Mar  5 12:48 GSPO.idn
   0 Mar  5 12:48 journal.jrnl
   545259520 Mar  5 12:38 node2id.dat
   150994944 Feb 20 16:32 node2id.idn
   497658012 Mar  5 12:38 nodes.dat
   1 Nov 14 15:27 none.opt
33554432 Jan 24 17:06 OSP.dat
  4848615424 Mar  5 12:48 OSPG.dat
   293601280 Mar  1 12:46 OSPG.idn
 8388608 Jan 24 16:59 OSP.idn
25165824 Jan 24 17:06 POS.dat
  4966055936 Mar  5 12:48 POSG.dat
   276824064 Mar  5 12:38 POSG.idn
 8388608 Jan 24 16:55 POS.idn
 8388608 Jan 31 12:06 prefix2id.dat
 8388608 Mar 15  2016 prefix2id.idn
6771 Jan 31 12:06 prefixes.dat
25165824 Jan 31 12:06 prefixIdx.dat
 8388608 Jan  8 13:19 prefixIdx.idn
33554432 Jan 24 17:06 SPO.dat
  5075107840 Mar  5 12:48 SPOG.dat
   369098752 Mar  5 12:48 SPOG.idn
 8388608 Jan 24 17:04 SPO.idn
4069 Nov  7 16:38 _stats.opt
   4 Feb  6 12:01 tdb.lock

On 30.1.2018 15:04, Andy Seaborne wrote:

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get some level 
isolation.

What's the Fuseki config in this case?

 Andy

On 24/01/18 23:40, Chris Tomlinson wrote:

On the latest 3.7.0-Snapshot (master branch) I also saw repeated occurrences of 
this the other day while running some queries from the fuseki browser app and 
with a database load going on with our own app using:

  DatasetAccessorFactory.createHTTP(baseUrl+"/data”);


with for the first model to transfer:

  DatasetAccessor putModel(graphName, m);

and for following models:

  static void addToTransferBulk(final String graphName, final Model m) {
  if (currentDataset == null)
  currentDataset = DatasetFactory.createGeneral();
  currentDataset.addNamedModel(graphName, m);
  triplesInDataset += m.size();
  if (triplesInDataset > initialLoadBulkSize) {
  try {
  loadDatasetMutex(currentDataset);
  currentDataset = null;
  triplesInDataset = 0;
  } catch (TimeoutException e) {
  e.printStackTrace();
  return;
  }
  }
  }

as I say the exceptions appeared while I was running some queries from from the 
fuseki browser app:


[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
 at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
 at 
org.apache.jena.tdb.store.no
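A common client-side mitigation for transient failures like the one in the trace above (and for the "active transactions" error asked about at the top of the thread) is to sleep and retry the request with backoff. A minimal sketch; the helper and its parameters are illustrative, not part of Jena:

```python
import time

def with_retry(fn, retries=3, delay=0.1, exceptions=(Exception,)):
    """Call fn(); on a listed exception, sleep with exponential
    backoff and retry, re-raising after the last attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except exceptions:
            if attempt == retries - 1:
                raise
            time.sleep(delay * (2 ** attempt))

# Example: a flaky operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient")
    return "ok"

result = with_retry(flaky, retries=5, delay=0.001, exceptions=(IOError,))
```

In a real client the retried callable would be the HTTP read against the Fuseki endpoint, and the exception tuple would be narrowed to the transient error actually observed.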

Re: Fuseki errors with concurrent requests

2018-03-05 Thread Andy Seaborne

You can't reliably copy the database files while the server is running.

To back up, dump the dataset as TriG or N-Quads: "GET http://HOST/DATABASE"

Andy
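The dump suggested above is a plain HTTP GET against the dataset endpoint with a quad serialization in the Accept header. A minimal sketch of building that request plus a dated backup filename; the host, port, and dataset name are placeholders, not from the thread:

```python
from datetime import date
from urllib.request import Request

def backup_request(base_url: str, dataset: str) -> tuple[Request, str]:
    """Build a GET request that dumps the whole dataset as N-Quads,
    plus a dated filename to save the dump under."""
    req = Request(f"{base_url}/{dataset}",
                  headers={"Accept": "application/n-quads"})
    filename = f"{dataset}-backup-{date.today().isoformat()}.nq"
    return req, filename

# Hypothetical endpoint; against a live server you would then do:
#   with urlopen(req) as resp, open(filename, "wb") as out:
#       shutil.copyfileobj(resp, out)
req, filename = backup_request("http://localhost:3030", "ds")
```

Because the server serializes the dump inside a read transaction, this gives a consistent snapshot, unlike copying the on-disk files.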

On 05/03/18 14:54, Mikael Pesonen wrote:


I understood that content has to be rebuilt somehow; just copying files 
back from a backup is not enough? And how do we know how old a backup is 
needed, i.e. when the corruption happened?



On 5.3.2018 16:50, ajs6f wrote:
I don't understand-- if you have known-good TDB backups available, why 
would you not start with them?


Dumping RDF files on the side is not a bad idea either, but TDB 
backups (such as are produced by the software itself) should be fine 
for most disaster-recovery purposes.


ajs6f

On Mar 5, 2018, at 9:48 AM, Mikael Pesonen 
 wrote:



Fortunately this is our development site. But backups are from TDB 
files; we don't have a plan for dumping RDF exports as a separate 
backup. So I guess we should add that as a next step?


On 5.3.2018 16:45, ajs6f wrote:
Not a problem at all. Many sites use TDB as their main store. Just 
like _any_ database, proper operation includes regular and frequent 
backups and a plan for rebuilding the store independently of any 
instance.


You _do_ have backups, right?

ajs6f

On Mar 5, 2018, at 9:43 AM, Mikael Pesonen 
 wrote:



Hi,

we are using Jena TDB as THE database for document metadata. Data 
is fed through our custom REST API. Is this something that would not 
be recommended?


Br,


On 5.3.2018 16:41, ajs6f wrote:
To my knowledge (Andy of course is the TDB expert) you can't 
really rebuild a TDB instance from a corrupted TDB instance. You 
should start with a known-good backup or original RDF files.


ajs6f

On Mar 5, 2018, at 9:32 AM, Mikael Pesonen 
 wrote:



Still having these issues on all of our installations.

I'm going to rule out corrupted database on our oldest server. 
What would be preferred way to rebuild data?


Data folder:

5226102784 Mar  5 12:48 GOSP.dat
 260046848 Mar  5 12:48 GOSP.idn
5377097728 Mar  5 12:48 GPOS.dat
 268435456 Mar  5 12:48 GPOS.idn
5486149632 Mar  5 12:48 GSPO.dat
 285212672 Mar  5 12:48 GSPO.idn
         0 Mar  5 12:48 journal.jrnl
 545259520 Mar  5 12:38 node2id.dat
 150994944 Feb 20 16:32 node2id.idn
 497658012 Mar  5 12:38 nodes.dat
         1 Nov 14 15:27 none.opt
  33554432 Jan 24 17:06 OSP.dat
4848615424 Mar  5 12:48 OSPG.dat
 293601280 Mar  1 12:46 OSPG.idn
   8388608 Jan 24 16:59 OSP.idn
  25165824 Jan 24 17:06 POS.dat
4966055936 Mar  5 12:48 POSG.dat
 276824064 Mar  5 12:38 POSG.idn
   8388608 Jan 24 16:55 POS.idn
   8388608 Jan 31 12:06 prefix2id.dat
   8388608 Mar 15  2016 prefix2id.idn
      6771 Jan 31 12:06 prefixes.dat
  25165824 Jan 31 12:06 prefixIdx.dat
   8388608 Jan  8 13:19 prefixIdx.idn
  33554432 Jan 24 17:06 SPO.dat
5075107840 Mar  5 12:48 SPOG.dat
 369098752 Mar  5 12:48 SPOG.idn
   8388608 Jan 24 17:04 SPO.idn
      4069 Nov  7 16:38 _stats.opt
         4 Feb  6 12:01 tdb.lock

On 30.1.2018 15:04, Andy Seaborne wrote:

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previously corrupted database.

This is a text dataset? That should be using an MRSW lock to get 
some level of isolation.


What's the Fuseki config in this case?

 Andy

On 24/01/18 23:40, Chris Tomlinson wrote:
On the latest 3.7.0-Snapshot (master branch) I also saw 
repeated occurrences of this the other day while running some 
queries from the fuseki browser app and with a database load 
going on with our own app using:


  DatasetAccessorFactory.createHTTP(baseUrl + "/data");


with, for the first model to transfer:

  DatasetAccessor putModel(graphName, m);

and for following models:

  static void addToTransferBulk(final String graphName, final Model m) {
      if (currentDataset == null)
          currentDataset = DatasetFactory.createGeneral();
      currentDataset.addNamedModel(graphName, m);
      triplesInDataset += m.size();
      if (triplesInDataset > initialLoadBulkSize) {
          try {
              loadDatasetMutex(currentDataset);
              currentDataset = null;
              triplesInDataset = 0;
          } catch (TimeoutException e) {
              e.printStackTrace();
              return;
          }
      }
  }

as I say, the exceptions appeared while I was running some 
queries from the Fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw

[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of 
an alloc-write
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311) 

 at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57) 

 at 
org.apach

Re: Fuseki errors with concurrent requests

2018-03-05 Thread Mikael Pesonen


I understood that content has to be rebuilt somehow; just copying files 
back from a backup is not enough? And how do we know how old a backup is 
needed, i.e. when the corruption happened?



On 5.3.2018 16:50, ajs6f wrote:

I don't understand-- if you have known-good TDB backups available, why would 
you not start with them?

Dumping RDF files on the side is not a bad idea either, but TDB backups (such 
as are produced by the software itself) should be fine for most 
disaster-recovery purposes.

ajs6f



Re: Fuseki errors with concurrent requests

2018-03-05 Thread ajs6f
"fairly high load and concurrent usage"

This is not a very precise or reproducible measure.

Many sites use Jena in production at all kinds of scales for all kinds of 
dimensions, including HA setups. If you can explain more about your specific 
situation, you will get more useful advice.

ajs6f

> On Mar 5, 2018, at 9:45 AM, Mikael Pesonen  wrote:
> 
> 
> To be clear: can Jena be recommended as a production database in our customer 
> cases for fairly high load and concurrent usage? Or is it mainly for 
> scientific purposes?
> 
> Br
> 

Re: Fuseki errors with concurrent requests

2018-03-05 Thread ajs6f
I don't understand-- if you have known-good TDB backups available, why would 
you not start with them?

Dumping RDF files on the side is not a bad idea either, but TDB backups (such 
as are produced by the software itself) should be fine for most 
disaster-recovery purposes.

ajs6f

> On Mar 5, 2018, at 9:48 AM, Mikael Pesonen  wrote:
> 
> 
> Fortunately this is our development site. But backups are from TDB files; we 
> don't have a plan for dumping RDF exports as a separate backup. So I guess we 
> should add that as a next step?
> 

Re: Fuseki errors with concurrent requests

2018-03-05 Thread Mikael Pesonen


Fortunately this is our development site. But backups are from TDB 
files; we don't have a plan for dumping RDF exports as a separate backup. 
So I guess we should add that as a next step?


On 5.3.2018 16:45, ajs6f wrote:

Not a problem at all. Many sites use TDB as their main store. Just like _any_ 
database, proper operation includes regular and frequent backups and a plan for 
rebuilding the store independently of any instance.

You _do_ have backups, right?

ajs6f



Re: Fuseki errors with concurrent requests

2018-03-05 Thread Mikael Pesonen


To be clear: can Jena be recommended as a production database in our 
customer cases for fairly high load and concurrent usage? Or is it 
mainly for scientific purposes?


Br

On 5.3.2018 16:41, ajs6f wrote:

To my knowledge (Andy of course is the TDB expert) you can't really rebuild a 
TDB instance from a corrupted TDB instance. You should start with a known-good 
backup or original RDF files.

ajs6f



Re: Fuseki errors with concurrent requests

2018-03-05 Thread ajs6f
Not a problem at all. Many sites use TDB as their main store. Just like _any_ 
database, proper operation includes regular and frequent backups and a plan for 
rebuilding the store independently of any instance. 

You _do_ have backups, right?

ajs6f

> On Mar 5, 2018, at 9:43 AM, Mikael Pesonen  wrote:
> 
> 
> Hi,
> 
> we are using Jena TDB as THE database for document metadata. Data is fed 
> through our custom REST API. Is this something that would not be recommended?
> 
> Br,
> 
> 
> On 5.3.2018 16:41, ajs6f wrote:
>> To my knowledge (Andy of course is the TDB expert) you can't really rebuild 
>> a TDB instance from a corrupted TDB instance. You should start with a 
>> known-good backup or original RDF files.
>> 
>> ajs6f
>> 
>>> On Mar 5, 2018, at 9:32 AM, Mikael Pesonen  
>>> wrote:
>>> 
>>> 
>>> Still having these issues on all of our installations.
>>> 
>>> I'm going to rule out corrupted database on our oldest server. What would 
>>> be preferred way to rebuild data?
>>> 
>>> Data folder:
>>> 
>>>  5226102784 Mar  5 12:48 GOSP.dat
>>>   260046848 Mar  5 12:48 GOSP.idn
>>>  5377097728 Mar  5 12:48 GPOS.dat
>>>   268435456 Mar  5 12:48 GPOS.idn
>>>  5486149632 Mar  5 12:48 GSPO.dat
>>>   285212672 Mar  5 12:48 GSPO.idn
>>>   0 Mar  5 12:48 journal.jrnl
>>>   545259520 Mar  5 12:38 node2id.dat
>>>   150994944 Feb 20 16:32 node2id.idn
>>>   497658012 Mar  5 12:38 nodes.dat
>>>   1 Nov 14 15:27 none.opt
>>>33554432 Jan 24 17:06 OSP.dat
>>>  4848615424 Mar  5 12:48 OSPG.dat
>>>   293601280 Mar  1 12:46 OSPG.idn
>>> 8388608 Jan 24 16:59 OSP.idn
>>>25165824 Jan 24 17:06 POS.dat
>>>  4966055936 Mar  5 12:48 POSG.dat
>>>   276824064 Mar  5 12:38 POSG.idn
>>> 8388608 Jan 24 16:55 POS.idn
>>> 8388608 Jan 31 12:06 prefix2id.dat
>>> 8388608 Mar 15  2016 prefix2id.idn
>>>6771 Jan 31 12:06 prefixes.dat
>>>25165824 Jan 31 12:06 prefixIdx.dat
>>> 8388608 Jan  8 13:19 prefixIdx.idn
>>>33554432 Jan 24 17:06 SPO.dat
>>>  5075107840 Mar  5 12:48 SPOG.dat
>>>   369098752 Mar  5 12:48 SPOG.idn
>>> 8388608 Jan 24 17:04 SPO.idn
>>>4069 Nov  7 16:38 _stats.opt
>>>   4 Feb  6 12:01 tdb.lock
>>> 
>>> On 30.1.2018 15:04, Andy Seaborne wrote:
 These seem to be different errors.
 
 "In the middle of an alloc-write" is possibly a concurrency issue.
 "Failed to read" is possibly a previous corrupted database
 
 This is a text dataset? That should be using an MRSW lock to get some 
 level isolation.
 
 What's the Fuseki config in this case?
 
 Andy
 
 On 24/01/18 23:40, Chris Tomlinson wrote:
> On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
> occurrences of this the other day while running some queries from the 
> fuseki browser app and with a database load going on with our own app 
> using:
> 
>  DatasetAccessorFactory.createHTTP(baseUrl+"/data”);
> 
> 
> with for the first model to transfer:
> 
>  DatasetAccessor putModel(graphName, m);
> 
> and for following models:
> 
>  static void addToTransferBulk(final String graphName, final Model m) {
>      if (currentDataset == null)
>          currentDataset = DatasetFactory.createGeneral();
>      currentDataset.addNamedModel(graphName, m);
>      triplesInDataset += m.size();
>      if (triplesInDataset > initialLoadBulkSize) {
>          try {
>              loadDatasetMutex(currentDataset);
>              currentDataset = null;
>              triplesInDataset = 0;
>          } catch (TimeoutException e) {
>              e.printStackTrace();
>              return;
>          }
>      }
>  }
> 
> as I say the exceptions appeared while I was running some queries from 
> the fuseki browser app:
> 
>> [2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
>> [2018-01-22 16:25:03] Fuseki INFO  [477] POST 
>> http://localhost:13180/fuseki/bdrcrw
>> [2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
>> org.apache.jena.tdb.base.file.FileException: In the middle of an 
>> alloc-write
>> at 
>> org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
>> at 
>> org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
>> at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
>> at 
>> org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
>> at 
>> org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
>> at 
>> org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
>> at 
>> org.apache.jena.tdb.store.nodetable.NodeTableCache

Re: Fuseki errors with concurrent requests

2018-03-05 Thread Mikael Pesonen


Hi,

we are using Jena TDB as THE database for document metadata. Data is fed 
through our custom REST API. Is this something that would not be recommended?


Br,


On 5.3.2018 16:41, ajs6f wrote:

To my knowledge (Andy of course is the TDB expert) you can't really rebuild a 
TDB instance from a corrupted TDB instance. You should start with a known-good 
backup or original RDF files.

ajs6f


On Mar 5, 2018, at 9:32 AM, Mikael Pesonen  wrote:


Still having these issues on all of our installations.

I'm going to rule out a corrupted database on our oldest server. What would be 
the preferred way to rebuild the data?

Data folder:

  5226102784 Mar  5 12:48 GOSP.dat
   260046848 Mar  5 12:48 GOSP.idn
  5377097728 Mar  5 12:48 GPOS.dat
   268435456 Mar  5 12:48 GPOS.idn
  5486149632 Mar  5 12:48 GSPO.dat
   285212672 Mar  5 12:48 GSPO.idn
   0 Mar  5 12:48 journal.jrnl
   545259520 Mar  5 12:38 node2id.dat
   150994944 Feb 20 16:32 node2id.idn
   497658012 Mar  5 12:38 nodes.dat
   1 Nov 14 15:27 none.opt
33554432 Jan 24 17:06 OSP.dat
  4848615424 Mar  5 12:48 OSPG.dat
   293601280 Mar  1 12:46 OSPG.idn
 8388608 Jan 24 16:59 OSP.idn
25165824 Jan 24 17:06 POS.dat
  4966055936 Mar  5 12:48 POSG.dat
   276824064 Mar  5 12:38 POSG.idn
 8388608 Jan 24 16:55 POS.idn
 8388608 Jan 31 12:06 prefix2id.dat
 8388608 Mar 15  2016 prefix2id.idn
6771 Jan 31 12:06 prefixes.dat
25165824 Jan 31 12:06 prefixIdx.dat
 8388608 Jan  8 13:19 prefixIdx.idn
33554432 Jan 24 17:06 SPO.dat
  5075107840 Mar  5 12:48 SPOG.dat
   369098752 Mar  5 12:48 SPOG.idn
 8388608 Jan 24 17:04 SPO.idn
4069 Nov  7 16:38 _stats.opt
   4 Feb  6 12:01 tdb.lock

On 30.1.2018 15:04, Andy Seaborne wrote:

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get some level 
of isolation.

What's the Fuseki config in this case?

 Andy

On 24/01/18 23:40, Chris Tomlinson wrote:

On the latest 3.7.0-Snapshot (master branch) I also saw repeated occurrences of 
this the other day while running some queries from the fuseki browser app and 
with a database load going on with our own app using:

  DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

  DatasetAccessor putModel(graphName, m);

and for following models:

  static void addToTransferBulk(final String graphName, final Model m) {
      if (currentDataset == null)
          currentDataset = DatasetFactory.createGeneral();
      currentDataset.addNamedModel(graphName, m);
      triplesInDataset += m.size();
      if (triplesInDataset > initialLoadBulkSize) {
          try {
              loadDatasetMutex(currentDataset);
              currentDataset = null;
              triplesInDataset = 0;
          } catch (TimeoutException e) {
              e.printStackTrace();
              return;
          }
      }
  }

as I say, the exceptions appeared while I was running some queries from the 
fuseki browser app:


[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
 at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
 at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
 at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
 at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
 at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
 at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
 at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
 at org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.jav

Re: Fuseki errors with concurrent requests

2018-03-05 Thread ajs6f
To my knowledge (Andy of course is the TDB expert) you can't really rebuild a 
TDB instance from a corrupted TDB instance. You should start with a known-good 
backup or original RDF files.

ajs6f

> On Mar 5, 2018, at 9:32 AM, Mikael Pesonen  wrote:
> 
> 
> Still having these issues on all of our installations.
> 
> I'm going to rule out a corrupted database on our oldest server. What would be 
> the preferred way to rebuild the data?
> 
> Data folder:
> 
>  5226102784 Mar  5 12:48 GOSP.dat
>   260046848 Mar  5 12:48 GOSP.idn
>  5377097728 Mar  5 12:48 GPOS.dat
>   268435456 Mar  5 12:48 GPOS.idn
>  5486149632 Mar  5 12:48 GSPO.dat
>   285212672 Mar  5 12:48 GSPO.idn
>   0 Mar  5 12:48 journal.jrnl
>   545259520 Mar  5 12:38 node2id.dat
>   150994944 Feb 20 16:32 node2id.idn
>   497658012 Mar  5 12:38 nodes.dat
>   1 Nov 14 15:27 none.opt
>33554432 Jan 24 17:06 OSP.dat
>  4848615424 Mar  5 12:48 OSPG.dat
>   293601280 Mar  1 12:46 OSPG.idn
> 8388608 Jan 24 16:59 OSP.idn
>25165824 Jan 24 17:06 POS.dat
>  4966055936 Mar  5 12:48 POSG.dat
>   276824064 Mar  5 12:38 POSG.idn
> 8388608 Jan 24 16:55 POS.idn
> 8388608 Jan 31 12:06 prefix2id.dat
> 8388608 Mar 15  2016 prefix2id.idn
>6771 Jan 31 12:06 prefixes.dat
>25165824 Jan 31 12:06 prefixIdx.dat
> 8388608 Jan  8 13:19 prefixIdx.idn
>33554432 Jan 24 17:06 SPO.dat
>  5075107840 Mar  5 12:48 SPOG.dat
>   369098752 Mar  5 12:48 SPOG.idn
> 8388608 Jan 24 17:04 SPO.idn
>4069 Nov  7 16:38 _stats.opt
>   4 Feb  6 12:01 tdb.lock
> 
> On 30.1.2018 15:04, Andy Seaborne wrote:
>> These seem to be different errors.
>> 
>> "In the middle of an alloc-write" is possibly a concurrency issue.
>> "Failed to read" is possibly a previous corrupted database
>> 
>> This is a text dataset? That should be using an MRSW lock to get some level 
>> of isolation.
>> 
>> What's the Fuseki config in this case?
>> 
>> Andy
>> 
>> On 24/01/18 23:40, Chris Tomlinson wrote:
>>> On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
>>> occurrences of this the other day while running some queries from the 
>>> fuseki browser app and with a database load going on with our own app using:
>>> 
>>>  DatasetAccessorFactory.createHTTP(baseUrl+"/data");
>>> 
>>> 
>>> with for the first model to transfer:
>>> 
>>>  DatasetAccessor putModel(graphName, m);
>>> 
>>> and for following models:
>>> 
>>>  static void addToTransferBulk(final String graphName, final Model m) {
>>>      if (currentDataset == null)
>>>          currentDataset = DatasetFactory.createGeneral();
>>>      currentDataset.addNamedModel(graphName, m);
>>>      triplesInDataset += m.size();
>>>      if (triplesInDataset > initialLoadBulkSize) {
>>>          try {
>>>              loadDatasetMutex(currentDataset);
>>>              currentDataset = null;
>>>              triplesInDataset = 0;
>>>          } catch (TimeoutException e) {
>>>              e.printStackTrace();
>>>              return;
>>>          }
>>>      }
>>>  }
>>> 
>>> as I say the exceptions appeared while I was running some queries from from 
>>> the fuseki browser app:
>>> 
 [2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
 [2018-01-22 16:25:03] Fuseki INFO  [477] POST 
 http://localhost:13180/fuseki/bdrcrw
 [2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
 org.apache.jena.tdb.base.file.FileException: In the middle of an 
 alloc-write
 at 
 org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
 at 
 org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
 at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
 at 
 org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
 at 
 org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
 at 
 org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
 at 
 org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
 at 
 org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
 at 
 org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
 at 
 org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
 at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
 at 
 org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
 at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
 at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
 at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
 

Re: Fuseki errors with concurrent requests

2018-03-05 Thread Mikael Pesonen


Still having these issues on all of our installations.

I'm going to rule out a corrupted database on our oldest server. What 
would be the preferred way to rebuild the data?


Data folder:

 5226102784 Mar  5 12:48 GOSP.dat
  260046848 Mar  5 12:48 GOSP.idn
 5377097728 Mar  5 12:48 GPOS.dat
  268435456 Mar  5 12:48 GPOS.idn
 5486149632 Mar  5 12:48 GSPO.dat
  285212672 Mar  5 12:48 GSPO.idn
  0 Mar  5 12:48 journal.jrnl
  545259520 Mar  5 12:38 node2id.dat
  150994944 Feb 20 16:32 node2id.idn
  497658012 Mar  5 12:38 nodes.dat
  1 Nov 14 15:27 none.opt
   33554432 Jan 24 17:06 OSP.dat
 4848615424 Mar  5 12:48 OSPG.dat
  293601280 Mar  1 12:46 OSPG.idn
    8388608 Jan 24 16:59 OSP.idn
   25165824 Jan 24 17:06 POS.dat
 4966055936 Mar  5 12:48 POSG.dat
  276824064 Mar  5 12:38 POSG.idn
    8388608 Jan 24 16:55 POS.idn
    8388608 Jan 31 12:06 prefix2id.dat
    8388608 Mar 15  2016 prefix2id.idn
   6771 Jan 31 12:06 prefixes.dat
   25165824 Jan 31 12:06 prefixIdx.dat
    8388608 Jan  8 13:19 prefixIdx.idn
   33554432 Jan 24 17:06 SPO.dat
 5075107840 Mar  5 12:48 SPOG.dat
  369098752 Mar  5 12:48 SPOG.idn
    8388608 Jan 24 17:04 SPO.idn
   4069 Nov  7 16:38 _stats.opt
  4 Feb  6 12:01 tdb.lock

On 30.1.2018 15:04, Andy Seaborne wrote:

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get some 
level of isolation.


What's the Fuseki config in this case?

    Andy

On 24/01/18 23:40, Chris Tomlinson wrote:
On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
occurrences of this the other day while running some queries from the 
fuseki browser app and with a database load going on with our own app 
using:


 DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

 static void addToTransferBulk(final String graphName, final Model m) {
     if (currentDataset == null)
         currentDataset = DatasetFactory.createGeneral();
     currentDataset.addNamedModel(graphName, m);
     triplesInDataset += m.size();
     if (triplesInDataset > initialLoadBulkSize) {
         try {
             loadDatasetMutex(currentDataset);
             currentDataset = null;
             triplesInDataset = 0;
         } catch (TimeoutException e) {
             e.printStackTrace();
             return;
         }
     }
 }

as I say the exceptions appeared while I was running some queries 
from the fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw

[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an 
alloc-write
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)

at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)

at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)

at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
at 
org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
at 
org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58) 

at 
org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
at 
org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
at 
org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apach

Re: Fuseki errors with concurrent requests

2018-02-06 Thread Mikael Pesonen


Hi,

it seems that building an isolated test setup to reproduce the issue is 
difficult. Would it be possible/helpful if we arranged access to our 
server and the test script which generates the error?


Mikael


On 30.1.2018 15:04, Andy Seaborne wrote:

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get some 
level of isolation.


What's the Fuseki config in this case?

    Andy

On 24/01/18 23:40, Chris Tomlinson wrote:
On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
occurrences of this the other day while running some queries from the 
fuseki browser app and with a database load going on with our own app 
using:


 DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

 static void addToTransferBulk(final String graphName, final Model m) {
     if (currentDataset == null)
         currentDataset = DatasetFactory.createGeneral();
     currentDataset.addNamedModel(graphName, m);
     triplesInDataset += m.size();
     if (triplesInDataset > initialLoadBulkSize) {
         try {
             loadDatasetMutex(currentDataset);
             currentDataset = null;
             triplesInDataset = 0;
         } catch (TimeoutException e) {
             e.printStackTrace();
             return;
         }
     }
 }

as I say the exceptions appeared while I was running some queries 
from the fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw

[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an 
alloc-write
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)

at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)

at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)

at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
at 
org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
at 
org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58) 

at 
org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
at 
org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
at 
org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterConcat.hasNextBinding(QueryIterConcat.java:82)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:74)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:58)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.has

Re: Fuseki errors with concurrent requests

2018-01-30 Thread Mikael Pesonen


I can prepare one.

So to be clear: when using an Apache web server -> multiple PHP processes 
-> Fuseki/GSP over HTTP, is that safe for reads and writes? Or does this 
configuration need an additional single queue for write operations?
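
For illustration only, the "single queue" idea could be sketched in-process with a one-thread executor, so that all write operations reach the store strictly one at a time while reads go direct. Everything here (the class, submitWrite, the simulated write) is a hypothetical sketch, not a Jena API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: serialize all writes through one worker thread.
public class WriteQueue {
    private final ExecutorService writes = Executors.newSingleThreadExecutor();

    // Writes submitted here execute strictly one after another.
    public void submitWrite(Runnable writeOp) {
        writes.submit(writeOp);
    }

    public void shutdownAndWait() throws InterruptedException {
        writes.shutdown();
        writes.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        WriteQueue q = new WriteQueue();
        AtomicInteger applied = new AtomicInteger();
        // Each Runnable stands in for one HTTP update against the GSP endpoint.
        for (int i = 0; i < 5; i++) {
            q.submitWrite(applied::incrementAndGet);
        }
        q.shutdownAndWait();
        System.out.println(applied.get());
    }
}
```

Note that with multiple independent PHP processes there is no shared in-process queue, so the equivalent would have to sit in front of Fuseki (e.g. a single worker draining a job queue) rather than inside each process.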




On 30.1.2018 15:00, Andy Seaborne wrote:
Could you please turn this into a standalone, complete, minimal 
example?  AKA something with all the details that can be run by 
someone else including how the server is being run, what disk storage 
you are using, and whether the database starts from fresh or not.


Does it happen on earlier versions of Fuseki?

    Andy



On 30/01/18 09:28, Mikael Pesonen wrote:


Hi,

My test depends on a REST API we developed. So basically 
simultaneous calls to an Apache web server, which runs PHP scripts 
that call Fuseki using curl.




On 29.1.2018 16:56, ajs6f wrote:
That might be worth trying, although since TDB1 is MRSW (multiple 
reader or single writer), that queuing of updates should be going on 
on the server-side.


I haven't had time to look at this issue, and it's difficult to say 
much without a reproducible phenomenon. Do either of y'all have 
test code we can use to demonstrate this?


ajs6f

On Jan 29, 2018, at 5:43 AM, Mikael Pesonen 
 wrote:



Until there's a better solution, a quick one would be to put all operations 
through a single queue?


Br

On 25.1.2018 4:11, Chris Tomlinson wrote:

Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl 



Chris

On Jan 24, 2018, at 17:40, Chris Tomlinson 
 wrote:


On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
occurrences of this the other day while running some queries from 
the fuseki browser app and with a database load going on with our 
own app using:


 DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

 static void addToTransferBulk(final String graphName, final Model m) {
     if (currentDataset == null)
         currentDataset = DatasetFactory.createGeneral();
     currentDataset.addNamedModel(graphName, m);
     triplesInDataset += m.size();
     if (triplesInDataset > initialLoadBulkSize) {
         try {
             loadDatasetMutex(currentDataset);
             currentDataset = null;
             triplesInDataset = 0;
         } catch (TimeoutException e) {
             e.printStackTrace();
             return;
         }
     }
 }

as I say the exceptions appeared while I was running some queries 
from the fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw

[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an 
alloc-write
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311) 

at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57) 


at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67) 

at 
org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121) 


at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
at 
org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76) 

at 
org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58) 

at 
org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72) 

at 
org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
at 
org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49) 

at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69) 

at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114) 

at 
org

Re: Fuseki errors with concurrent requests

2018-01-30 Thread Andy Seaborne

On 29/01/18 14:56, ajs6f wrote:

TDB1 is MRSW (multiple reader or single writer)


TDB1 is multiple reader and single writer

> it's difficult to say much without a reproducible phenomenon

Yes - I can't see any specific clues here so far.

Andy
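
Since TDB1 allows multiple readers but only a single writer, one pragmatic client-side mitigation for transient failures under write contention is retry with exponential backoff. A minimal, hypothetical sketch (the failing operation below merely simulates a busy server; in real code the trigger would be an error response from Fuseki, and retry is not a Jena API):

```java
import java.util.concurrent.Callable;

public class RetryWithBackoff {
    // Run op up to maxAttempts times, doubling the pause after each failure.
    static <T> T retry(Callable<T> op, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;            // remember the failure and back off
                Thread.sleep(delay);
                delay *= 2;
            }
        }
        throw last;                  // every attempt failed
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated operation: fails twice, then succeeds on the third attempt.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("busy");
            return "ok";
        }, 5, 1);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```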


Re: Fuseki errors with concurrent requests

2018-01-30 Thread Andy Seaborne

These seem to be different errors.

"In the middle of an alloc-write" is possibly a concurrency issue.
"Failed to read" is possibly a previous corrupted database

This is a text dataset? That should be using an MRSW lock to get some 
level of isolation.


What's the Fuseki config in this case?

Andy

On 24/01/18 23:40, Chris Tomlinson wrote:

On the latest 3.7.0-Snapshot (master branch) I also saw repeated occurrences of 
this the other day while running some queries from the fuseki browser app and 
with a database load going on with our own app using:

 DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

 static void addToTransferBulk(final String graphName, final Model m) {
     if (currentDataset == null)
         currentDataset = DatasetFactory.createGeneral();
     currentDataset.addNamedModel(graphName, m);
     triplesInDataset += m.size();
     if (triplesInDataset > initialLoadBulkSize) {
         try {
             loadDatasetMutex(currentDataset);
             currentDataset = null;
             triplesInDataset = 0;
         } catch (TimeoutException e) {
             e.printStackTrace();
             return;
         }
     }
 }

as I say, the exceptions appeared while I was running some queries from the 
fuseki browser app:


[2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
at 
org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58)
at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
at org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
at 
org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterConcat.hasNextBinding(QueryIterConcat.java:82)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:74)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:58)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterDistinct.getInputNextUnseen(QueryIterDistinct.java:104

Re: Fuseki errors with concurrent requests

2018-01-30 Thread Andy Seaborne
Could you please turn this into a standalone, complete, minimal example? 
 AKA something with all the details that can be run by someone else 
including how the server is being run, what disk storage you are using, 
and whether the database starts from fresh or not.


Does it happen on earlier versions of Fuseki?

Andy



On 30/01/18 09:28, Mikael Pesonen wrote:


Hi,

My test depends on a REST API we developed. So basically simultaneous 
calls to an Apache web server, which runs PHP scripts that call Fuseki using curl.




On 29.1.2018 16:56, ajs6f wrote:
That might be worth trying, although since TDB1 is MRSW (multiple 
reader or single writer), that queuing of updates should be going on 
on the server-side.


I haven't had time to look at this issue, and it's difficult to say 
much without a reproducible phenomenon. Do either of y'all have 
test code we can use to demonstrate this?


ajs6f

On Jan 29, 2018, at 5:43 AM, Mikael Pesonen 
 wrote:



Until there's a better solution, a quick one would be to put all operations 
through a single queue?


Br

On 25.1.2018 4:11, Chris Tomlinson wrote:

Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl 



Chris

On Jan 24, 2018, at 17:40, Chris Tomlinson 
 wrote:


On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
occurrences of this the other day while running some queries from 
the fuseki browser app and with a database load going on with our 
own app using:


 DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with for the first model to transfer:

 DatasetAccessor putModel(graphName, m);

and for following models:

 static void addToTransferBulk(final String graphName, final Model m) {
     if (currentDataset == null)
         currentDataset = DatasetFactory.createGeneral();
     currentDataset.addNamedModel(graphName, m);
     triplesInDataset += m.size();
     if (triplesInDataset > initialLoadBulkSize) {
         try {
             loadDatasetMutex(currentDataset);
             currentDataset = null;
             triplesInDataset = 0;
         } catch (TimeoutException e) {
             e.printStackTrace();
             return;
         }
     }
 }

As I say, the exceptions appeared while I was running some queries 
from the Fuseki browser app:



[2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw

[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an 
alloc-write
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311) 

at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57) 


at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50) 

at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67) 

at 
org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121) 


at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
at 
org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
at 
org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58) 

at 
org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
at 
org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
at 
org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49) 

at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69) 

at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114) 

at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66) 

at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114) 

at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBind

Re: Fuseki errors with concurrent requests

2018-01-30 Thread Mikael Pesonen


Hi,

My test depends on a REST API we developed. So basically simultaneous 
calls to an Apache web server, which loads PHP, which calls Fuseki using curl.




On 29.1.2018 16:56, ajs6f wrote:

That might be worth trying, although since TDB1 is MRSW (multiple reader or 
single writer), that queuing of updates should already be happening on the 
server side.

I haven't had time to look at this issue, and it's difficult to say much 
without a reproducible phenomenon. Do either of y'all have test code we can 
use to demonstrate this?

ajs6f


On Jan 29, 2018, at 5:43 AM, Mikael Pesonen  wrote:


Until there's a better solution, a quick one would be to put all operations through a 
single queue?

Br

On 25.1.2018 4:11, Chris Tomlinson wrote:

Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl

Chris


On Jan 24, 2018, at 17:40, Chris Tomlinson  wrote:

On the latest 3.7.0-Snapshot (master branch) I also saw repeated occurrences of 
this the other day while running some queries from the fuseki browser app and 
with a database load going on with our own app using:

    DatasetAccessorFactory.createHTTP(baseUrl + "/data");

with, for the first model to transfer:

    DatasetAccessor putModel(graphName, m);

and for the following models:

    static void addToTransferBulk(final String graphName, final Model m) {
        if (currentDataset == null)
            currentDataset = DatasetFactory.createGeneral();
        currentDataset.addNamedModel(graphName, m);
        triplesInDataset += m.size();
        if (triplesInDataset > initialLoadBulkSize) {
            try {
                loadDatasetMutex(currentDataset);
                currentDataset = null;
                triplesInDataset = 0;
            } catch (TimeoutException e) {
                e.printStackTrace();
                return;
            }
        }
    }

As I say, the exceptions appeared while I was running some queries from the 
Fuseki browser app:


[2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
at 
org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58)
at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
at org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
at 
org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterConcat.hasNextBinding(QueryIterConcat.java:82)
at 
org.apache.jena.sparql.engine.iterator.QueryIt

Re: Fuseki errors with concurrent requests

2018-01-29 Thread Chris Tomlinson
Yes, the loading is single-threaded. It's a simple app that performs bulk loading 
using:

    DatasetAccessorFactory.createHTTP(baseUrl + "/data");

and for the first model to transfer:

    DatasetAccessor putModel(graphName, m);

and for the following models:

    static void addToTransferBulk(final String graphName, final Model m) {
        if (currentDataset == null)
            currentDataset = DatasetFactory.createGeneral();
        currentDataset.addNamedModel(graphName, m);
        triplesInDataset += m.size();
        if (triplesInDataset > initialLoadBulkSize) {
            try {
                loadDatasetMutex(currentDataset);
                currentDataset = null;
                triplesInDataset = 0;
            } catch (TimeoutException e) {
                e.printStackTrace();
                return;
            }
        }
    }


> On Jan 29, 2018, at 9:30 AM, ajs6f  wrote:
> 
> That (using a queue) would depend on what you mean by "a database load going 
> on with our own app". Are you doing those updates single-threaded?
> 
> ajs6f
> 
>> On Jan 29, 2018, at 10:27 AM, Chris Tomlinson  
>> wrote:
>> 
>> I don’t have any test code per se. I was running a load tool that we wrote 
>> (code snippets included) and issuing a few simple SPARQL queries via the Fuseki 
>> browser app. That’s the extent of the test harness.
>> 
>> I don’t see how the setup that I described would make use of a queue.
>> 
>> Chris
>> 
>> 
>>> On Jan 29, 2018, at 8:56 AM, ajs6f  wrote:
>>> 
>>> That might be worth trying, although since TDB1 is MRSW (multiple reader or 
>>> single writer), that queuing of updates should already be happening on the 
>>> server side.
>>> 
>>> I haven't had time to look at this issue, and it's difficult to say much 
>>> without a reproducible phenomenon. Do either of y'all have test code we 
>>> can use to demonstrate this?
>>> 
>>> ajs6f
>>> 
 On Jan 29, 2018, at 5:43 AM, Mikael Pesonen  
 wrote:
 
 
 Until there's a better solution, a quick one would be to put all operations through a 
 single queue?
 
 Br
 
 On 25.1.2018 4:11, Chris Tomlinson wrote:
> Also,
> 
> Here's a link to the fuseki config:
> 
> https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl
> 
> Chris
> 
>> On Jan 24, 2018, at 17:40, Chris Tomlinson  
>> wrote:
>> 
>> On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
>> occurrences of this the other day while running some queries from the 
>> fuseki browser app and with a database load going on with our own app 
>> using:
>> 
>>     DatasetAccessorFactory.createHTTP(baseUrl + "/data");
>> 
>> with, for the first model to transfer:
>> 
>>     DatasetAccessor putModel(graphName, m);
>> 
>> and for the following models:
>> 
>>     static void addToTransferBulk(final String graphName, final Model m) {
>>         if (currentDataset == null)
>>             currentDataset = DatasetFactory.createGeneral();
>>         currentDataset.addNamedModel(graphName, m);
>>         triplesInDataset += m.size();
>>         if (triplesInDataset > initialLoadBulkSize) {
>>             try {
>>                 loadDatasetMutex(currentDataset);
>>                 currentDataset = null;
>>                 triplesInDataset = 0;
>>             } catch (TimeoutException e) {
>>                 e.printStackTrace();
>>                 return;
>>             }
>>         }
>>     }
>> 
>> As I say, the exceptions appeared while I was running some queries from 
>> the Fuseki browser app:
>> 
>>> [2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
>>> [2018-01-22 16:25:03] Fuseki INFO  [477] POST 
>>> http://localhost:13180/fuseki/bdrcrw
>>> [2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
>>> org.apache.jena.tdb.base.file.FileException: In the middle of an 
>>> alloc-write
>>> at 
>>> org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
>>> at 
>>> org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
>>> at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
>>> at 
>>> org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
>>> at 
>>> org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
>>> at 
>>> org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
>>> at 
>>> org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
>>> at 
>>> org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
>>> at 
>>> org.apache.jena.tdb.store.nodetable.NodeTable
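The batching logic in addToTransferBulk above follows a common pattern: accumulate items until their combined size crosses a threshold, then flush the whole batch in one call. Here is a minimal plain-Java sketch of just that pattern, with all names hypothetical and with strings standing in for the models and for the remote load call, so it runs without Jena:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the accumulate-then-flush pattern used by
// addToTransferBulk: items are collected until their combined size
// exceeds a threshold, then the whole batch is flushed at once.
public class BulkBatcher {
    private final int bulkSize;                          // stands in for initialLoadBulkSize
    private final List<List<String>> flushes = new ArrayList<>();
    private List<String> current;                        // stands in for the accumulating Dataset
    private int sizeInBatch = 0;                         // stands in for triplesInDataset

    public BulkBatcher(int bulkSize) {
        this.bulkSize = bulkSize;
    }

    public void add(String graphName, int size) {        // size stands in for m.size()
        if (current == null)
            current = new ArrayList<>();
        current.add(graphName);                          // stands in for addNamedModel
        sizeInBatch += size;
        if (sizeInBatch > bulkSize) {
            flushes.add(current);                        // stands in for loadDatasetMutex
            current = null;
            sizeInBatch = 0;
        }
    }

    public List<List<String>> flushedBatches() {
        return flushes;
    }
}
```

Note that, as in the original snippet, a final partial batch stays unflushed until something triggers one last flush at the end of the load.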

Re: Fuseki errors with concurrent requests

2018-01-29 Thread ajs6f
That (using a queue) would depend on what you mean by "a database load going on 
with our own app". Are you doing those updates single-threaded?

ajs6f

> On Jan 29, 2018, at 10:27 AM, Chris Tomlinson  
> wrote:
> 
> I don’t have any test code per se. I was running a load tool that we wrote 
> (code snippets included) and issuing a few simple SPARQL queries via the Fuseki 
> browser app. That’s the extent of the test harness.
> 
> I don’t see how the setup that I described would make use of a queue.
> 
> Chris
> 
> 
>> On Jan 29, 2018, at 8:56 AM, ajs6f  wrote:
>> 
>> That might be worth trying, although since TDB1 is MRSW (multiple reader or 
>> single writer), that queuing of updates should already be happening on the 
>> server side.
>> 
>> I haven't had time to look at this issue, and it's difficult to say much 
>> without a reproducible phenomenon. Do either of y'all have test code we 
>> can use to demonstrate this?
>> 
>> ajs6f
>> 
>>> On Jan 29, 2018, at 5:43 AM, Mikael Pesonen  
>>> wrote:
>>> 
>>> 
>>> Until there's a better solution, a quick one would be to put all operations through a 
>>> single queue?
>>> 
>>> Br
>>> 
>>> On 25.1.2018 4:11, Chris Tomlinson wrote:
 Also,
 
 Here's a link to the fuseki config:
 
 https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl
 
 Chris
 
> On Jan 24, 2018, at 17:40, Chris Tomlinson  
> wrote:
> 
> On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
> occurrences of this the other day while running some queries from the 
> fuseki browser app and with a database load going on with our own app 
> using:
> 
>     DatasetAccessorFactory.createHTTP(baseUrl + "/data");
> 
> with, for the first model to transfer:
> 
>     DatasetAccessor putModel(graphName, m);
> 
> and for the following models:
> 
>     static void addToTransferBulk(final String graphName, final Model m) {
>         if (currentDataset == null)
>             currentDataset = DatasetFactory.createGeneral();
>         currentDataset.addNamedModel(graphName, m);
>         triplesInDataset += m.size();
>         if (triplesInDataset > initialLoadBulkSize) {
>             try {
>                 loadDatasetMutex(currentDataset);
>                 currentDataset = null;
>                 triplesInDataset = 0;
>             } catch (TimeoutException e) {
>                 e.printStackTrace();
>                 return;
>             }
>         }
>     }
> 
> As I say, the exceptions appeared while I was running some queries from 
> the Fuseki browser app:
> 
>> [2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
>> [2018-01-22 16:25:03] Fuseki INFO  [477] POST 
>> http://localhost:13180/fuseki/bdrcrw
>> [2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
>> org.apache.jena.tdb.base.file.FileException: In the middle of an 
>> alloc-write
>>  at 
>> org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
>>  at 
>> org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
>>  at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
>>  at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
>>  at 
>> org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
>>  at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
>>  at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
>>  at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
>>  at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
>>  at 
>> org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58)
>>  at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
>>  at org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterPro

Re: Fuseki errors with concurrent requests

2018-01-29 Thread Chris Tomlinson
I don’t have any test code per se. I was running a load tool that we wrote 
(code snippets included) and issuing a few simple SPARQL queries via the Fuseki 
browser app. That’s the extent of the test harness.

I don’t see how the setup that I described would make use of a queue.

Chris


> On Jan 29, 2018, at 8:56 AM, ajs6f  wrote:
> 
> That might be worth trying, although since TDB1 is MRSW (multiple reader or 
> single writer), that queuing of updates should already be happening on the 
> server side.
> 
> I haven't had time to look at this issue, and it's difficult to say much 
> without a reproducible phenomenon. Do either of y'all have test code we 
> can use to demonstrate this?
> 
> ajs6f
> 
>> On Jan 29, 2018, at 5:43 AM, Mikael Pesonen  
>> wrote:
>> 
>> 
>> Until there's a better solution, a quick one would be to put all operations through a 
>> single queue?
>> 
>> Br
>> 
>> On 25.1.2018 4:11, Chris Tomlinson wrote:
>>> Also,
>>> 
>>> Here's a link to the fuseki config:
>>> 
>>> https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl
>>> 
>>> Chris
>>> 
 On Jan 24, 2018, at 17:40, Chris Tomlinson  
 wrote:
 
 On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
 occurrences of this the other day while running some queries from the 
 fuseki browser app and with a database load going on with our own app 
 using:
 
    DatasetAccessorFactory.createHTTP(baseUrl + "/data");

with, for the first model to transfer:

    DatasetAccessor putModel(graphName, m);

and for the following models:

    static void addToTransferBulk(final String graphName, final Model m) {
        if (currentDataset == null)
            currentDataset = DatasetFactory.createGeneral();
        currentDataset.addNamedModel(graphName, m);
        triplesInDataset += m.size();
        if (triplesInDataset > initialLoadBulkSize) {
            try {
                loadDatasetMutex(currentDataset);
                currentDataset = null;
                triplesInDataset = 0;
            } catch (TimeoutException e) {
                e.printStackTrace();
                return;
            }
        }
    }
 
As I say, the exceptions appeared while I was running some queries from 
the Fuseki browser app:
 
> [2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
> [2018-01-22 16:25:03] Fuseki INFO  [477] POST 
> http://localhost:13180/fuseki/bdrcrw
> [2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
> org.apache.jena.tdb.base.file.FileException: In the middle of an 
> alloc-write
>   at 
> org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
>   at 
> org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
>   at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
>   at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
>   at 
> org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
>   at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
>   at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
>   at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
>   at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
>   at 
> org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58)
>   at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
>   at org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
>>>

Re: Fuseki errors with concurrent requests

2018-01-29 Thread ajs6f
That might be worth trying, although since TDB1 is MRSW (multiple reader or 
single writer), that queuing of updates should already be happening on the 
server side.

I haven't had time to look at this issue, and it's difficult to say much 
without a reproducible phenomenon. Do either of y'all have test code we can 
use to demonstrate this?

ajs6f

> On Jan 29, 2018, at 5:43 AM, Mikael Pesonen  
> wrote:
> 
> 
> Until there's a better solution, a quick one would be to put all operations through a 
> single queue?
> 
> Br
> 
> On 25.1.2018 4:11, Chris Tomlinson wrote:
>> Also,
>> 
>> Here's a link to the fuseki config:
>> 
>> https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl
>> 
>> Chris
>> 
>>> On Jan 24, 2018, at 17:40, Chris Tomlinson  
>>> wrote:
>>> 
>>> On the latest 3.7.0-Snapshot (master branch) I also saw repeated 
>>> occurrences of this the other day while running some queries from the 
>>> fuseki browser app and with a database load going on with our own app using:
>>> 
>>>     DatasetAccessorFactory.createHTTP(baseUrl + "/data");
>>> 
>>> with, for the first model to transfer:
>>> 
>>>     DatasetAccessor putModel(graphName, m);
>>> 
>>> and for the following models:
>>> 
>>>     static void addToTransferBulk(final String graphName, final Model m) {
>>>         if (currentDataset == null)
>>>             currentDataset = DatasetFactory.createGeneral();
>>>         currentDataset.addNamedModel(graphName, m);
>>>         triplesInDataset += m.size();
>>>         if (triplesInDataset > initialLoadBulkSize) {
>>>             try {
>>>                 loadDatasetMutex(currentDataset);
>>>                 currentDataset = null;
>>>                 triplesInDataset = 0;
>>>             } catch (TimeoutException e) {
>>>                 e.printStackTrace();
>>>                 return;
>>>             }
>>>         }
>>>     }
>>> 
>>> As I say, the exceptions appeared while I was running some queries from 
>>> the Fuseki browser app:
>>> 
 [2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
 [2018-01-22 16:25:03] Fuseki INFO  [477] POST 
 http://localhost:13180/fuseki/bdrcrw
 [2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
 org.apache.jena.tdb.base.file.FileException: In the middle of an 
 alloc-write
at 
 org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
at 
 org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
 org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
at 
 org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
at 
 org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
at 
 org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
at 
 org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
at 
 org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
at 
 org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at 
 org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
at 
 org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58)
at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
at org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
at 
 org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
at 
 org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69)
at 
 org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
 org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
 org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
 org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
 org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
 org.apache.jena.sparql.engine.iterator.QueryIterCon
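Since TDB1 is MRSW, a request can fail while a write is still in progress, and the question at the top of the thread asks whether a client could sleep and retry. A hypothetical client-side sketch of such a retry-with-backoff wrapper follows; withRetry and its parameters are made up for illustration and are not a Jena or Fuseki API:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: retry a request (e.g. an HTTP call to a Fuseki
// endpoint) a few times with exponential backoff before giving up.
public class Retry {
    public static <T> T withRetry(Callable<T> request, int maxAttempts, long initialBackoffMs)
            throws Exception {
        long backoff = initialBackoffMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return request.call();       // e.g. POST the update / GSP request
            } catch (Exception e) {
                last = e;                    // e.g. a 5xx while a write is in progress
                if (attempt < maxAttempts)
                    Thread.sleep(backoff);   // give the server time to finish its transaction
                backoff *= 2;                // exponential backoff between attempts
            }
        }
        throw last;                          // all attempts failed
    }
}
```

In practice, the catch block would inspect the failure and only retry errors that look like transient contention, rather than every exception.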

Re: Fuseki errors with concurrent requests

2018-01-29 Thread Mikael Pesonen


Until there's a better solution, a quick one would be to put all operations through 
a single queue?


Br

On 25.1.2018 4:11, Chris Tomlinson wrote:

Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl

Chris


On Jan 24, 2018, at 17:40, Chris Tomlinson  wrote:

On the latest 3.7.0-Snapshot (master branch) I also saw repeated occurrences of 
this the other day while running some queries from the fuseki browser app and 
with a database load going on with our own app using:

    DatasetAccessorFactory.createHTTP(baseUrl + "/data");

with, for the first model to transfer:

    DatasetAccessor putModel(graphName, m);

and for the following models:

    static void addToTransferBulk(final String graphName, final Model m) {
        if (currentDataset == null)
            currentDataset = DatasetFactory.createGeneral();
        currentDataset.addNamedModel(graphName, m);
        triplesInDataset += m.size();
        if (triplesInDataset > initialLoadBulkSize) {
            try {
                loadDatasetMutex(currentDataset);
                currentDataset = null;
                triplesInDataset = 0;
            } catch (TimeoutException e) {
                e.printStackTrace();
                return;
            }
        }
    }

As I say, the exceptions appeared while I was running some queries from the 
Fuseki browser app:


[2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
[2018-01-22 16:25:03] Fuseki INFO  [477] POST 
http://localhost:13180/fuseki/bdrcrw
[2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
at 
org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58)
at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
at org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
at 
org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterConcat.hasNextBinding(QueryIterConcat.java:82)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:74)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:58)
at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at 
org.apache.jena.sparql.engine.iterator.QueryIterDistinct.getInputNextUnseen(QueryIterDistinct.
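The single-queue idea suggested above can be sketched with a single-threaded executor: every store operation goes through one worker thread, so requests reach Fuseki strictly one at a time and writes never overlap. All names here are hypothetical; this is a client-side workaround sketch, not a Jena API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: funnel every store operation through one queue so
// that requests run strictly one at a time, in submission order.
public class UpdateQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public void submit(Runnable storeOperation) {
        worker.submit(storeOperation);       // queued; runs on the single worker thread
    }

    public void shutdown() throws InterruptedException {
        worker.shutdown();                   // drain the queue, then stop
        worker.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        UpdateQueue q = new UpdateQueue();
        List<String> order = new ArrayList<>();   // touched only by the worker thread
        for (int i = 0; i < 3; i++) {
            final int n = i;
            q.submit(() -> order.add("op" + n));  // would be an HTTP call to Fuseki
        }
        q.shutdown();
        System.out.println(order);                // operations ran in submission order
    }
}
```

The trade-off is that this serializes everything, reads included, exchanging throughput for the guarantee that the server never sees concurrent writes.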

Re: Fuseki errors with concurrent requests

2018-01-24 Thread Chris Tomlinson
Also,

Here's a link to the fuseki config:

https://raw.githubusercontent.com/BuddhistDigitalResourceCenter/buda-base/master/conf/fuseki/bdrc-example.ttl

Chris

> On Jan 24, 2018, at 17:40, Chris Tomlinson  
> wrote:
> 
> On the latest 3.7.0-Snapshot (master branch) I also saw repeated occurrences 
> of this the other day while running some queries from the fuseki browser app 
> and with a database load going on with our own app using:
> 
>     DatasetAccessorFactory.createHTTP(baseUrl + "/data");
> 
> with, for the first model to transfer:
> 
>     DatasetAccessor putModel(graphName, m);
> 
> and for the following models:
> 
>     static void addToTransferBulk(final String graphName, final Model m) {
>         if (currentDataset == null)
>             currentDataset = DatasetFactory.createGeneral();
>         currentDataset.addNamedModel(graphName, m);
>         triplesInDataset += m.size();
>         if (triplesInDataset > initialLoadBulkSize) {
>             try {
>                 loadDatasetMutex(currentDataset);
>                 currentDataset = null;
>                 triplesInDataset = 0;
>             } catch (TimeoutException e) {
>                 e.printStackTrace();
>                 return;
>             }
>         }
>     }
> 
> As I say, the exceptions appeared while I was running some queries from 
> the Fuseki browser app:
> 
>> [2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
>> [2018-01-22 16:25:03] Fuseki INFO  [477] POST 
>> http://localhost:13180/fuseki/bdrcrw
>> [2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
>> org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
>>  at 
>> org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
>>  at 
>> org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
>>  at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
>>  at 
>> org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
>>  at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
>>  at 
>> org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
>>  at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
>>  at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
>>  at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
>>  at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
>>  at 
>> org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58)
>>  at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
>>  at org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterConcat.hasNextBinding(QueryIterConcat.java:82)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:74)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:58)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>>  at 
>> org.apache.jena.sparql.engine.iterator.QueryIterDistinct.getInputNextUnseen(QueryIterDistin

Re: Fuseki errors with concurrent requests

2018-01-24 Thread Chris Tomlinson
On the latest 3.7.0-Snapshot (master branch) I also saw repeated occurrences of 
this the other day while running some queries from the fuseki browser app and 
with a database load going on with our own app using:

DatasetAccessorFactory.createHTTP(baseUrl+"/data");


with, for the first model to transfer:

DatasetAccessor putModel(graphName, m);

and for following models:

static void addToTransferBulk(final String graphName, final Model m) {
    if (currentDataset == null)
        currentDataset = DatasetFactory.createGeneral();
    currentDataset.addNamedModel(graphName, m);
    triplesInDataset += m.size();
    if (triplesInDataset > initialLoadBulkSize) {
        try {
            loadDatasetMutex(currentDataset);
            currentDataset = null;
            triplesInDataset = 0;
        } catch (TimeoutException e) {
            e.printStackTrace();
            return;
        }
    }
}
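A bounded retry with exponential backoff around such a bulk load gives the store time to finish an in-flight write before the client gives up. The sketch below is illustrative only: `RetryLoader` and its parameters are not part of Jena, and wiring it around `loadDatasetMutex(...)` is an assumption, not the poster's actual code.

```java
import java.util.concurrent.Callable;

public class RetryLoader {

    /** Retry a task with exponential backoff; rethrows the last failure. */
    public static <T> T retry(Callable<T> task, int maxAttempts, long initialDelayMs)
            throws Exception {
        if (maxAttempts < 1)
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {      // e.g. an HTTP 500 or TimeoutException
                last = e;
                if (attempt == maxAttempts)
                    break;
                Thread.sleep(delay);     // back off so the store can finish its write
                delay *= 2;
            }
        }
        throw last;
    }
}
```

The bulk-load call in the code above could then be wrapped as `RetryLoader.retry(() -> { loadDatasetMutex(currentDataset); return null; }, 5, 500)`.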

as I say, the exceptions appeared while I was running some queries from the
Fuseki browser app:

> [2018-01-22 16:25:02] Fuseki INFO  [475] 200 OK (17.050 s)
> [2018-01-22 16:25:03] Fuseki INFO  [477] POST 
> http://localhost:13180/fuseki/bdrcrw
> [2018-01-22 16:25:03] BindingTDB ERROR get1(?lit)
> org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
>   at 
> org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311)
>   at 
> org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
>   at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
>   at 
> org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
>   at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
>   at 
> org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
>   at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:60)
>   at org.apache.jena.sparql.expr.ExprVar.eval(ExprVar.java:53)
>   at org.apache.jena.sparql.expr.ExprNode.eval(ExprNode.java:93)
>   at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:76)
>   at 
> org.apache.jena.sparql.expr.E_LogicalOr.evalSpecial(E_LogicalOr.java:58)
>   at org.apache.jena.sparql.expr.ExprFunction2.eval(ExprFunction2.java:72)
>   at org.apache.jena.sparql.expr.ExprNode.isSatisfied(ExprNode.java:41)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterFilterExpr.accept(QueryIterFilterExpr.java:49)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:69)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterConcat.hasNextBinding(QueryIterConcat.java:82)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:74)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:58)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterDistinct.getInputNextUnseen(QueryIterDistinct.java:104)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIterDistinct.hasNextBinding(QueryIterDistinct.java:70)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
>   at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBin

Re: Fuseki errors with concurrent requests

2018-01-24 Thread Mikael Pesonen


And running the test set with 1 concurrent loop, it was repeated 1
times without errors. So the error occurs only with more than one
concurrent operation.
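A concurrency test of this shape can be sketched as follows. This is a minimal, self-contained harness, not the poster's actual script: the injectable `worker` callback stands in for one real Fuseki CRUD round-trip (e.g. an insert immediately followed by a read-back).

```python
import concurrent.futures

def run_concurrent(worker, n_workers=10, iterations=5):
    """Run worker(i) across n_workers threads and collect any failures.

    `worker` stands in for one CRUD round-trip against Fuseki; it is
    injectable so the harness itself stays self-contained.
    """
    failures = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(worker, i)
                   for i in range(n_workers * iterations)]
        for fut in concurrent.futures.as_completed(futures):
            try:
                fut.result()
            except Exception as exc:   # e.g. an HTTP 500 surfaced by the client
                failures.append(exc)
    return failures
```

With `n_workers=1` every request runs serially, which matches the observation that the error only appears once operations overlap.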


On 24.1.2018 17:40, ajs6f wrote:

Please show your code, including Fuseki config. It's hard to diagnose what is 
going wrong when we don't know what is supposed to happen.

ajs6f


On Jan 24, 2018, at 10:26 AM, Mikael Pesonen  wrote:


Hi,

I have a test script running 10 concurrent CRUD operations in a loop. After a few
operations I get HTTP error 500, and these errors appear in the Fuseki server
output. Am I doing something wrong here?

[2018-01-24 17:16:53] BindingTDB ERROR get1(?o)
org.apache.jena.tdb.base.file.FileException: 
ObjectFileStorage.read[nodes](491421708)[filesize=495059272][file.size()=495059272]:
 Failed to read the length : got 0 bytes
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:341)
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
 at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
 at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
 at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
 at 
org.apache.jena.sparql.engine.binding.BindingFactory.materialize(BindingFactory.java:60)
 at 
org.apache.jena.tdb.solver.QueryEngineTDB$QueryIteratorMaterializeBinding.moveToNextBinding(QueryEngineTDB.java:131)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:156)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorWrapper.moveToNextBinding(QueryIteratorWrapper.java:42)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:156)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.next(QueryIteratorBase.java:131)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.next(QueryIteratorBase.java:40)
 at org.apache.jena.atlas.iterator.Iter$2.next(Iter.java:270)
 at 
org.apache.jena.ext.com.google.common.collect.MultitransformedIterator.hasNext(MultitransformedIterator.java:52)
 at 
org.apache.jena.ext.com.google.common.collect.MultitransformedIterator.hasNext(MultitransformedIterator.java:50)
 at java.util.Iterator.forEachRemaining(Iterator.java:115)
 at 
org.apache.jena.sparql.engine.QueryExecutionBase.execConstructDataset(QueryExecutionBase.java:243)
 at 
org.apache.jena.sparql.engine.QueryExecutionBase.execConstructDataset(QueryExecutionBase.java:236)
 at 
org.apache.jena.fuseki.servlets.SPARQL_Query.executeQuery(SPARQL_Query.java:331)
 at 
org.apache.jena.fuseki.servlets.SPARQL_Query.execute(SPARQL_Query.java:270)
 at 
org.apache.jena.fuseki.servlets.SPARQL_Query.executeBody(SPARQL_Query.java:239)
 at 
org.apache.jena.fuseki.servlets.SPARQL_Query.perform(SPARQL_Query.java:219)
 at 
org.apache.jena.fuseki.servlets.ActionSPARQL.executeLifecycle(ActionSPARQL.java:132)
 at 
org.apache.jena.fuseki.servlets.SPARQL_UberServlet.executeRequest(SPARQL_UberServlet.java:356)
 at 
org.apache.jena.fuseki.servlets.SPARQL_UberServlet.executeAction(SPARQL_UberServlet.java:220)
 at 
org.apache.jena.fuseki.servlets.ActionSPARQL.execCommonWorker(ActionSPARQL.java:83)
 at org.apache.jena.fuseki.servlets.ActionBase.doCommon(ActionBase.java:82)
 at 
org.apache.jena.fuseki.servlets.FusekiFilter.doFilter(FusekiFilter.java:73)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
 at 
org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
 at 
org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
 at 
org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
 at 
org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
 at 
org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
 at 
org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
 at 
org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
 at 
org.apache.shiro.subject.

Re: Fuseki errors with concurrent requests

2018-01-24 Thread Mikael Pesonen


Hi,

The query is posted with curl to :3030/ds

This is the error message:

SELECT DISTINCT ?s ?p ?o WHERE {
  GRAPH  {
    { ?s dcterms:isFormatOf  }
    UNION
    {  dcterms:hasFormat ?s }
    UNION
    { VALUES ?s {  } . ?s dcterms:format ?f }
    ?s ?p ?o
}}    500        Error 500: Server Error


Fuseki - version 3.6.0 (Build date: 2017-12-13T21:13:34+)


config.ttl is the default one:


@prefix :    <#> .
@prefix fuseki:   .
@prefix rdf:  .
@prefix rdfs:     .
@prefix ja:   .

[] rdf:type fuseki:Server ;



The error seems to be random; identical queries are successful before
this error occurs.
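Until the root cause is found, one client-side workaround is to sleep and retry a failed request. A minimal shell sketch of such a wrapper follows; the curl invocation in the trailing comment is illustrative only, and the endpoint is an assumption:

```shell
#!/usr/bin/env bash
# retry_post MAX CMD...: run CMD, retrying with exponential backoff on failure.
retry_post() {
    local max=$1; shift
    local delay=1 attempt=1
    while true; do
        "$@" && return 0                 # success: stop retrying
        [ "$attempt" -ge "$max" ] && return 1
        sleep "$delay"                   # give the store time to finish its transaction
        delay=$((delay * 2))
        attempt=$((attempt + 1))
    done
}

# Illustrative use (endpoint and query are placeholders; -f makes curl
# exit non-zero on HTTP errors such as 500):
# retry_post 5 curl -sf http://localhost:3030/ds \
#     --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 1'
```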




On 24.1.2018 17:40, ajs6f wrote:

Please show your code, including Fuseki config. It's hard to diagnose what is 
going wrong when we don't know what is supposed to happen.

ajs6f


On Jan 24, 2018, at 10:26 AM, Mikael Pesonen  wrote:


Hi,

I have a test script running 10 concurrent CRUD operations in a loop. After a few
operations I get HTTP error 500, and these errors appear in the Fuseki server
output. Am I doing something wrong here?

[2018-01-24 17:16:53] BindingTDB ERROR get1(?o)
org.apache.jena.tdb.base.file.FileException: 
ObjectFileStorage.read[nodes](491421708)[filesize=495059272][file.size()=495059272]:
 Failed to read the length : got 0 bytes
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:341)
 at 
org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
 at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
 at 
org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
 at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
 at 
org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
 at 
org.apache.jena.sparql.engine.binding.BindingFactory.materialize(BindingFactory.java:60)
 at 
org.apache.jena.tdb.solver.QueryEngineTDB$QueryIteratorMaterializeBinding.moveToNextBinding(QueryEngineTDB.java:131)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:156)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorWrapper.moveToNextBinding(QueryIteratorWrapper.java:42)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:156)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.next(QueryIteratorBase.java:131)
 at 
org.apache.jena.sparql.engine.iterator.QueryIteratorBase.next(QueryIteratorBase.java:40)
 at org.apache.jena.atlas.iterator.Iter$2.next(Iter.java:270)
 at 
org.apache.jena.ext.com.google.common.collect.MultitransformedIterator.hasNext(MultitransformedIterator.java:52)
 at 
org.apache.jena.ext.com.google.common.collect.MultitransformedIterator.hasNext(MultitransformedIterator.java:50)
 at java.util.Iterator.forEachRemaining(Iterator.java:115)
 at 
org.apache.jena.sparql.engine.QueryExecutionBase.execConstructDataset(QueryExecutionBase.java:243)
 at 
org.apache.jena.sparql.engine.QueryExecutionBase.execConstructDataset(QueryExecutionBase.java:236)
 at 
org.apache.jena.fuseki.servlets.SPARQL_Query.executeQuery(SPARQL_Query.java:331)
 at 
org.apache.jena.fuseki.servlets.SPARQL_Query.execute(SPARQL_Query.java:270)
 at 
org.apache.jena.fuseki.servlets.SPARQL_Query.executeBody(SPARQL_Query.java:239)
 at 
org.apache.jena.fuseki.servlets.SPARQL_Query.perform(SPARQL_Query.java:219)
 at 
org.apache.jena.fuseki.servlets.ActionSPARQL.executeLifecycle(ActionSPARQL.java:132)
 at 
org.apache.jena.fuseki.servlets.SPARQL_UberServlet.executeRequest(SPARQL_UberServlet.java:356)
 at 
org.apache.jena.fuseki.servlets.SPARQL_UberServlet.executeAction(SPARQL_UberServlet.java:220)
 at 
org.apache.jena.fuseki.servlets.ActionSPARQL.execCommonWorker(ActionSPARQL.java:83)
 at org.apache.jena.fuseki.servlets.ActionBase.doCommon(ActionBase.java:82)
 at 
org

Re: Fuseki errors with concurrent requests

2018-01-24 Thread ajs6f
Please show your code, including Fuseki config. It's hard to diagnose what is 
going wrong when we don't know what is supposed to happen.

ajs6f

> On Jan 24, 2018, at 10:26 AM, Mikael Pesonen  
> wrote:
> 
> 
> Hi,
> 
> I have a test script running 10 concurrent CRUD operations in a loop. After
> a few operations I get HTTP error 500, and these errors appear in the Fuseki
> server output. Am I doing something wrong here?
> 
> [2018-01-24 17:16:53] BindingTDB ERROR get1(?o)
> org.apache.jena.tdb.base.file.FileException: 
> ObjectFileStorage.read[nodes](491421708)[filesize=495059272][file.size()=495059272]:
>  Failed to read the length : got 0 bytes
> at 
> org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:341)
> at 
> org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57)
> at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78)
> at 
> org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
> at 
> org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
> at 
> org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
> at 
> org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
> at 
> org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
> at 
> org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
> at 
> org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
> at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
> at 
> org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
> at 
> org.apache.jena.sparql.engine.binding.BindingFactory.materialize(BindingFactory.java:60)
> at 
> org.apache.jena.tdb.solver.QueryEngineTDB$QueryIteratorMaterializeBinding.moveToNextBinding(QueryEngineTDB.java:131)
> at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:156)
> at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorWrapper.moveToNextBinding(QueryIteratorWrapper.java:42)
> at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:156)
> at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.next(QueryIteratorBase.java:131)
> at 
> org.apache.jena.sparql.engine.iterator.QueryIteratorBase.next(QueryIteratorBase.java:40)
> at org.apache.jena.atlas.iterator.Iter$2.next(Iter.java:270)
> at 
> org.apache.jena.ext.com.google.common.collect.MultitransformedIterator.hasNext(MultitransformedIterator.java:52)
> at 
> org.apache.jena.ext.com.google.common.collect.MultitransformedIterator.hasNext(MultitransformedIterator.java:50)
> at java.util.Iterator.forEachRemaining(Iterator.java:115)
> at 
> org.apache.jena.sparql.engine.QueryExecutionBase.execConstructDataset(QueryExecutionBase.java:243)
> at 
> org.apache.jena.sparql.engine.QueryExecutionBase.execConstructDataset(QueryExecutionBase.java:236)
> at 
> org.apache.jena.fuseki.servlets.SPARQL_Query.executeQuery(SPARQL_Query.java:331)
> at 
> org.apache.jena.fuseki.servlets.SPARQL_Query.execute(SPARQL_Query.java:270)
> at 
> org.apache.jena.fuseki.servlets.SPARQL_Query.executeBody(SPARQL_Query.java:239)
> at 
> org.apache.jena.fuseki.servlets.SPARQL_Query.perform(SPARQL_Query.java:219)
> at 
> org.apache.jena.fuseki.servlets.ActionSPARQL.executeLifecycle(ActionSPARQL.java:132)
> at 
> org.apache.jena.fuseki.servlets.SPARQL_UberServlet.executeRequest(SPARQL_UberServlet.java:356)
> at 
> org.apache.jena.fuseki.servlets.SPARQL_UberServlet.executeAction(SPARQL_UberServlet.java:220)
> at 
> org.apache.jena.fuseki.servlets.ActionSPARQL.execCommonWorker(ActionSPARQL.java:83)
> at org.apache.jena.fuseki.servlets.ActionBase.doCommon(ActionBase.java:82)
> at 
> org.apache.jena.fuseki.servlets.FusekiFilter.doFilter(FusekiFilter.java:73)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
> at 
> org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
> at 
> org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
> at 
> org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
> at 
> org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
> at 
> org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
> at 
> org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
> at 
> org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
> at 
> org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable