"Dr. André Lanka" wrote:
> Now and then we get query timeouts (5 seconds limit). Would this
> "irritate" the transaction management?

No. That's fine and reasonable.
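
(For illustration, a minimal sketch of the pattern in question - a query
with a 5-second timeout inside a read transaction. The location and the
query are placeholders, and this is the general Jena/TDB 0.9.x pattern,
not code from this thread. The timeout cancels the query, not the
transaction, so the transaction is still ended cleanly:)

    import java.util.concurrent.TimeUnit;

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.query.QueryCancelledException;
    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.query.ResultSetFormatter;
    import com.hp.hpl.jena.tdb.TDBFactory;

    Dataset ds = TDBFactory.createDataset("/path/to/DB");
    ds.begin(ReadWrite.READ);
    try {
        QueryExecution qexec =
            QueryExecutionFactory.create("SELECT * { ?s ?p ?o }", ds);
        qexec.setTimeout(5, TimeUnit.SECONDS);   // the 5-second limit
        try {
            ResultSetFormatter.consume(qexec.execSelect());
        } catch (QueryCancelledException ex) {
            // Query timed out; the read transaction is still ended
            // normally below, so transaction management is unaffected.
        } finally {
            qexec.close();
        }
    } finally {
        ds.end();   // always end the transaction, timeout or not
    }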


Hi Andy,

On 01.03.2013 11:03, "Dr. André Lanka" wrote:
Perhaps we misuse the API? For writers, we use two transactions (both
read and write) within the same thread.


We reduced (but could not completely prevent) the long-lived reader.
Just to double-check: is it appropriate to use two datasets (one read
and one write transaction) in parallel in the same thread?

That's fine - there are tests in the test suite for multiple transactions on the same thread in nested and unnested configurations.
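
(A minimal sketch of that kind of pattern, assuming the
StoreConnection/DatasetGraphTxn API of TDB 0.9.x; the location is a
placeholder:)

    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.tdb.StoreConnection;
    import com.hp.hpl.jena.tdb.base.file.Location;
    import com.hp.hpl.jena.tdb.transaction.DatasetGraphTxn;

    StoreConnection sConn = StoreConnection.make(new Location("/path/to/DB"));

    DatasetGraphTxn dsgRead  = sConn.begin(ReadWrite.READ);   // reader
    DatasetGraphTxn dsgWrite = sConn.begin(ReadWrite.WRITE);  // writer, same thread

    // ... add/delete quads through dsgWrite, query through dsgRead ...

    dsgWrite.commit();
    dsgWrite.end();

    dsgRead.end();   // the reader saw the state from before the write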

Your occasional long-lived read transaction sounds more like a JVM/scheduling issue under load, with Java not progressing a thread for a while.

        Andy


Thanks
André



I'll keep you posted about the cause of the problem.

Best
André



On 25.02.2013 23:16, Andy Seaborne wrote:
On 25/02/13 09:05, "Dr. André Lanka" wrote:
Hello,

we encounter some problems while writing to TDB stores (0.9.4). We get
StackOverflowExceptions now and then; many BlockMgrJournal objects seem
to be stacked on top of each other.
The problem seems to be solved by calling

StoreConnection.getExisting(Location).flush()

periodically.
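
(For concreteness, a sketch of such a periodic flush - the scheduler and
the 30-second interval are illustrative, not our actual code:)

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import com.hp.hpl.jena.tdb.StoreConnection;
    import com.hp.hpl.jena.tdb.base.file.Location;

    final Location location = new Location("/path/to/DB");
    ScheduledExecutorService flusher =
        Executors.newSingleThreadScheduledExecutor();
    flusher.scheduleAtFixedRate(new Runnable() {
        @Override public void run() {
            StoreConnection sConn = StoreConnection.getExisting(location);
            if (sConn != null)   // null if the store has not been attached
                sConn.flush();
        }
    }, 30, 30, TimeUnit.SECONDS);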

We wonder whether there is also an internal heuristic that determines
when the journal files get transferred to the base files, or whether
manual flushing is always necessary.

Do you have a sample stack trace? I can't tell what's going on from the
description.

You get a stack of BlockMgrJournals under two conditions:

1/ Changes from multiple writers are batched together and the
transactions are only applied to the permanent database every few writer
commits. This is not official API, but you can currently set this:
   TransactionManager.QueueBatchSize
The default is 10.

(Fuseki sets it to zero for increased stability when running many
datasets in one server.)

I would not have thought this on its own would cause a stack overflow.
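
(To set it in application code - this is the public static field named
above; as said, not official API:)

    import com.hp.hpl.jena.tdb.transaction.TransactionManager;

    // Apply each writer commit straight to the base files instead of
    // queueing up to 10 committed write transactions (the default).
    TransactionManager.QueueBatchSize = 0;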

2/ But if there is a read transaction about, the writer's changes can't
be written back (it's a write-ahead-logging system).

If you have long-lived readers, they may be blocking the writer
commits from finally clearing up (this happens after the commit of the
transaction).
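
(An illustrative sketch of that situation, reusing the StoreConnection
API from the sketch above - a reader held open across many writer
commits, so the journal can only grow; the loop count is made up:)

    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.tdb.StoreConnection;
    import com.hp.hpl.jena.tdb.base.file.Location;
    import com.hp.hpl.jena.tdb.transaction.DatasetGraphTxn;

    StoreConnection sConn = StoreConnection.make(new Location("/path/to/DB"));

    DatasetGraphTxn reader = sConn.begin(ReadWrite.READ);  // long-lived reader

    for (int i = 0; i < 1000; i++) {
        DatasetGraphTxn writer = sConn.begin(ReadWrite.WRITE);
        // ... changes ...
        writer.commit();   // recorded in the journal ...
        writer.end();      // ... but not written back while the reader lives
    }

    reader.end();   // only now can the journal drain to the base files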

 > StoreConnection.getExisting(Location).flush()

Hmm - looking at that, it needs a "synchronized" (on the underlying
TransactionManager operation).

     Andy


Thanks
André





