On 09/04/2021 14:11, jaa...@kolumbus.fi wrote:
Hi,
Could you suggest an optimal jena-fuseki heap size for my case? I'm
sending a 50 MB file to my jena-fuseki memory-based dataset every 5
minutes. (And should this be set on the JVM, actually?)
Jaana
Ultimately, for fine-tuning, the answer is "try". But 2G or so per
dataset.
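Yes, it is set on the JVM. A minimal sketch of one way to do it, assuming
the stock fuseki-server startup script (which picks up a JVM_ARGS
environment variable) and a placeholder dataset name /ds:

# Give the Fuseki JVM a 2G heap, then start an in-memory dataset at /ds.
# (Names and the heap figure here are placeholders, not a recommendation
# for every setup.)
export JVM_ARGS="-Xmx2G"
./fuseki-server --mem /ds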
jaa...@kolumbus.fi wrote on 8.4.2021 18:03:
Hello,
Still one question regarding this old issue. The previous answer said:
The heap size by default is quite small in the scripts. It might be an
idea to increase it a bit to give query working space but 0.5 million
is really not very big.
What would be a suitable heap size in my case?
This script looks somewhat suspect. You start a compaction (which is an
asynchronous background task) but then immediately start deleting files
(which could still be in use by a running compaction).
You really want to be polling the task status APIs to check that a
compaction has actually succeeded before deleting anything.
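Something along these lines (a sketch only: the dataset name /ds, the use
of jq, and the exact JSON field names such as "taskId" and "finished" are
assumptions to check against the Fuseki HTTP administration protocol for
your version):

# Start a compaction and capture the task id from the JSON response.
TASK_ID=$(curl -s -XPOST 'http://localhost:8061/$/compact/ds' | jq -r '.taskId')

# Poll the admin tasks endpoint until the task reports a "finished" time.
until curl -s "http://localhost:8061/\$/tasks/${TASK_ID}" | jq -e '.finished' > /dev/null; do
  sleep 10
done

# Only at this point is it reasonably safe to touch superseded Data-* directories.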
Hello,
I've been trying TDB2 with compaction. I have 2 TDB2 datasets in my
jena-fuseki. Both of them receive about 50 MB of uploads every 5
minutes. At the same time they are compacted hourly by the attached
script.
At some point I start getting these messages:
+ curl -i -XPOST 'localhost:8061/$