On 25/02/13 09:05, "Dr. André Lanka" wrote:
Hello,

we encounter some problems while writing to TDB stores (0.9.4). We get
StackOverflowExceptions now and then; many BlockMgrJournal objects
seem to be stacked on top of each other.
The problem seems to be solved by calling

StoreConnection.getExisting(Location).flush()

periodically.
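As a sketch of what "periodically" could look like, a ScheduledExecutorService can drive the flush. The Jena call from above is left as a comment so the scaffolding here is plain JDK; the 60-second interval is an arbitrary assumption, not a recommendation:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicFlush {

    // Runs the given task repeatedly at a fixed interval (in seconds)
    // and returns the executor so the caller can shut it down cleanly.
    public static ScheduledExecutorService every(long seconds, Runnable task) {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleAtFixedRate(task, seconds, seconds, TimeUnit.SECONDS);
        return exec;
    }

    public static void main(String[] args) {
        ScheduledExecutorService exec = every(60, () -> {
            // With Jena TDB on the classpath, the task would be the
            // workaround from the message above, e.g.:
            // StoreConnection.getExisting(location).flush();
            System.out.println("flush tick");
        });
        exec.shutdown();
    }
}
```

Remember to shut the executor down when the application stops, and to catch exceptions inside the task: an uncaught exception cancels all further scheduled runs.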

We wonder whether there is an internal heuristic that determines when
the journal files get transferred to the base files, or if flushing
manually is always necessary?

Do you have a sample stack trace? I can't tell what's going on from the description.

You get a stack of BlockMgrJournals under two conditions:

1/ Changes from multiple writers are batched together, and the transactions are only applied to the permanent database every few writer commits. This is not official API, but you can currently set this:
  TransactionManager.QueueBatchSize
the default is 10.

(Fuseki sets it to zero for increased stability when running many datasets in one server)
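Concretely, that would look something like the line below. It is a public static field on TDB's TransactionManager; being non-API, its location and behaviour may change between releases:

```java
// Unofficial knob: number of writer commits batched in the journal
// before being applied to the base database (default 10; Fuseki
// sets it to 0 so each commit is applied immediately).
TransactionManager.QueueBatchSize = 0 ;
```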

I would not have thought this on its own would cause a stack overflow.

2/ But if there is a read transaction about, the writer changes can't be written back (it's a write-ahead logging system).

If you have long-lived readers, then they may be blocking the writer commits from finally clearing up (this happens after the commit of the transaction).

> StoreConnection.getExisting(Location).flush()

Hmm - looking at that, it needs a "synchronized" (on the underlying TransactionManager operation).

        Andy


Thanks
André
