[
https://issues.apache.org/jira/browse/JENA-801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14195116#comment-14195116
]
Andy Seaborne edited comment on JENA-801 at 11/3/14 9:24 PM:
-------------------------------------------------------------
JENA-703 may contribute but that isn't likely to be a cause. A concurrent
cache on the BlockMgr may help, especially for direct mode; my guess is that
JIT inlining of method calls is distorting the lock trace, so locks are
reported at points that are not themselves locking.
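As a rough illustration of the "concurrent cache" idea (this is not Jena's
actual BlockMgr code; {{ConcurrentBlockCache}}, {{readBlockFromDisk}} and the
long/byte[] block representation are placeholders), a Guava cache with a
raised concurrency level lets many readers probe the cache without
serialising on a single lock, since Guava segments its internal locking:
{code:java}
// Illustrative sketch only: a concurrent read cache in front of block
// reads, of the kind suggested above for direct mode. Guava is already
// in use for the node table cache (per the attachment name).
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

class ConcurrentBlockCache {
    private final Cache<Long, byte[]> blocks =
        CacheBuilder.newBuilder()
                    .maximumSize(10000)     // bound memory use
                    .concurrencyLevel(16)   // segmented locks for many readers
                    .build();

    byte[] getBlock(long id) {
        byte[] b = blocks.getIfPresent(id);
        if (b == null) {
            b = readBlockFromDisk(id);      // hypothetical underlying read
            blocks.put(id, b);              // racy reloads are acceptable for a cache
        }
        return b;
    }

    private byte[] readBlockFromDisk(long id) {
        return new byte[0];                 // placeholder for the real block read
    }
}
{code}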
Judging by your comment on increasing the number of concurrent users, you are
saturating the system.
Try with {{TransactionManager.QueueBatchSize = 0}}. This does not fix the
issue but it changes what you will see. Making the system "writer priority"
would need a code change; performance may well drop if short-lived writers
get too much priority.
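For reference, a minimal sketch of where that setting would go, assuming the
Jena 2.11.x package layout ({{com.hp.hpl.jena.tdb.*}}) named in this issue;
the dataset path is a placeholder:
{code:java}
import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.query.ReadWrite;
import com.hp.hpl.jena.tdb.TDBFactory;
import com.hp.hpl.jena.tdb.transaction.TransactionManager;

public class QueueBatchSizeZero {
    public static void main(String[] args) {
        // 0 = flush each committed writer straight back to the main
        // database rather than queueing commits in batches.
        TransactionManager.QueueBatchSize = 0;

        Dataset ds = TDBFactory.createDataset("/path/to/tdb");
        ds.begin(ReadWrite.WRITE);
        try {
            // ... updates ...
            ds.commit();
        } finally {
            ds.end();
        }
        ds.close();
    }
}
{code}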
> When the server is under load, many queries pile up and seem to be in
> some kind of deadlock.
> ----------------------------------------------------------------------------------------------------
>
> Key: JENA-801
> URL: https://issues.apache.org/jira/browse/JENA-801
> Project: Apache Jena
> Issue Type: Bug
> Components: TDB
> Affects Versions: TDB 0.9.4, Jena 2.11.2
> Reporter: Bala Kolla
> Attachments:
> ThreadLocksInBlockMgrJournalAfterGuavaCacheInNodeTable.htm,
> WAITDataReportShowingTheLockContention.zip,
> WAITDataReportShowingTheLockContentionWithoutQueryFilter.zip
>
>
> We were testing our server with repositories of varied sizes, and in almost
> all cases, when the server reaches its peak capacity (the maximum number of
> users it can support), the queries seem to pile up because of lock
> contention in NodeTableCache.
> Here are some details about the repository:
> size of indices on disk - 150GB
> type of hard disk used - SSD and HDD with high RAM (we see the same result
> in both cases)
> OS - Linux
> Details on the user load:
> We are trying to simulate a very active user load where all the users are
> executing many use cases that result in many queries and updates on TDB.
> I would like to know the possible ways to work around and avoid this
> situation. I am thinking of the following; please let me know if there is
> any other way to work around this bottleneck.
> Control the updates to the triple store so that we only apply them when
> there are not many queries pending. We would have to experiment with how
> this impacts the use cases.
> Is there any other way to make this lock contention go away? Can we have
> multiple instances of this cache? For example, many (90%) of our queries are
> executed within a query scope (per project). So, could we have a separate
> NodeTable cache for each query scope (a project in our case) and one global
> cache?