[ 
https://issues.apache.org/jira/browse/JENA-901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368478#comment-14368478
 ] 

Stian Soiland-Reyes edited comment on JENA-901 at 3/19/15 4:14 AM:
-------------------------------------------------------------------

Not sure how much to trust this sizeOf -- but a second loop, even after 
data.clear(), doubles it again:

{code}
for (int i = 0; i < MAX * 4096; i++) {
    Node test = NodeFactory.createURI("test" + i);
    infgraph.find(test, ty, C2).close();
}
System.out.println(RamUsageEstimator.sizeOf(engine));
data.clear();
System.gc();
for (int i = 0; i < MAX * 4096; i++) {
    Node test = NodeFactory.createURI("test" + i);
    infgraph.find(test, ty, C2).close();
}
System.out.println(RamUsageEstimator.sizeOf(engine));
{code}

5728848
17171672
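
The doubling is consistent with every distinct goal pattern adding a fresh cache entry that nothing ever evicts. A minimal sketch of that unbounded-growth pattern (hypothetical names, not Jena code -- only the shape of the problem):

```java
import java.util.HashMap;
import java.util.Map;

public class UnboundedCacheDemo {
    // Mirrors the shape of tabledGoals: every distinct key adds an entry, nothing evicts.
    static final Map<String, Object> cache = new HashMap<>();

    static Object getOrCreate(String goal) {
        return cache.computeIfAbsent(goal, g -> new Object());
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) getOrCreate("test" + i);
        System.out.println(cache.size()); // 1000
        // A second pass over new keys doubles the cache,
        // just like the two sizeOf figures above.
        for (int i = 1000; i < 2000; i++) getOrCreate("test" + i);
        System.out.println(cache.size()); // 2000
    }
}
```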





> Make the cache of LPBRuleEngine bounded to avoid out-of-memory
> --------------------------------------------------------------
>
>                 Key: JENA-901
>                 URL: https://issues.apache.org/jira/browse/JENA-901
>             Project: Apache Jena
>          Issue Type: Improvement
>          Components: Reasoners
>    Affects Versions: Jena 2.12.1
>            Reporter: Jan De Beer
>
> The class "com.hp.hpl.jena.reasoner.rulesys.impl.LPBRuleEngine" uses an 
> in-memory cache named "tabledGoals", which has no limit as to the size/number 
> of entries stored.
>     /** Table mapping tabled goals to generators for those goals.
>      *  This is here so that partial goal state can be shared across multiple 
> queries. */
>     protected HashMap<TriplePattern, Generator> tabledGoals = new HashMap<>();
> We have experienced out-of-memory issues because the cache filled with 
> millions of entries in just a few days under normal query usage conditions 
> and a heap memory set to 3GB.
> In our setup, we have a dataset containing multiple graphs, some of them are 
> actual data graphs (backed by TDB), and then there are two which are ontology 
> models using a "TransitiveReasoner" and an "OWLMicroFBRuleReasoner", 
> respectively. A typical query may run over all the graphs in the dataset, 
> including the ontology ones (see below for a query template). Even though the 
> ontology graphs would not yield any additional results for data queries 
> (which is fine), the above-mentioned cache would still fill up with new 
> entries.
> SELECT ?p ?o
> WHERE {
>   GRAPH ?g {
>     <some resource of interest> ?p ?o .
>   }
> }
> As there is no upper bound on the cache, sooner or later all available heap 
> memory will be consumed by the cache, giving rise to a critical 
> out-of-memory error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
