Do you really have that much RAM in your machine? A 120G heap usually
doesn't make sense; most people run with 32G as a large heap.

That said, I asked, and currently the heap settings from the config file are
not used by neo4j-admin; you have to set them yourself:

export JAVA_OPTS="-Xmx24G -Xms24G"
neo4j-admin ...
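One detail worth making explicit: the value of JAVA_OPTS contains a space, so it has to be quoted or the shell splits it into two words. A minimal sketch (the 24G figures are just an example; size the heap to your store):

```shell
# JAVA_OPTS carries two JVM flags; without the quotes the shell would
# assign only -Xmx24G to the variable and then try to run -Xms24G as a
# separate command.
export JAVA_OPTS="-Xmx24G -Xms24G"

# Confirm both flags ended up in the variable before launching neo4j-admin.
echo "$JAVA_OPTS"   # prints: -Xmx24G -Xms24G
```

You can also check the Memory[...] lines in the consistency checker's own log output: the `max:` value there shows the heap the JVM actually got.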


On Mon, Feb 27, 2017 at 8:32 AM, unrealadmin23 via Neo4j <
neo4j@googlegroups.com> wrote:

>
> I should have said that the heap sizes are the ones that I have set in
> neo4j.conf.
>
> Will these be used by check-consistency, or do I need to supply them
> elsewhere?
>
> Wayne.
>
>
> On Monday, 27 February 2017 07:27:33 UTC, unreal...@googlemail.com wrote:
>>
>> Michael,
>>
>> neo4j-admin check-consistency --database=test.db --verbose
>>
>> dbms.memory.heap.initial_size=120000m
>> dbms.memory.heap.max_size=120000m
>>
>> Wayne.
>>
>>
>>
>> On Monday, 27 February 2017 02:47:26 UTC, Michael Hunger wrote:
>>>
>>> How did you call the consistency checker?
>>>
>>> How much heap did you provide for it?
>>>
>>> Cheers, Michael
>>>
>>>
>>> On Sun, Feb 26, 2017 at 8:28 PM, unrealadmin23 via Neo4j <
>>> ne...@googlegroups.com> wrote:
>>>
>>>> The following output was obtained:
>>>>
>>>> .
>>>> .
>>>> .
>>>>
>>>> ....................  90%
>>>> 2017-02-26 00:03:16.883+0000 INFO  [o.n.c.ConsistencyCheckService] === Stage7_RS_Backward ===
>>>> 2017-02-26 00:03:16.885+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
>>>> RelationshipStore
>>>>   Reads: 3374851294
>>>>   Random Reads: 2743390177
>>>>   ScatterIndex: 81
>>>>
>>>> 2017-02-26 00:03:16.886+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
>>>>   10338005177 skipCheck
>>>>   1697668360 missCheck
>>>>   5621138678 checked
>>>>   10338005177 correctSkipCheck
>>>>   1688855306 skipBackup
>>>>   3951022795 overwrite
>>>>   2247865 noCacheSkip
>>>>   239346598 activeCache
>>>>   119509521 clearCache
>>>>   2429587416 relSourcePrevCheck
>>>>   995786837 relSourceNextCheck
>>>>   2058354842 relTargetPrevCheck
>>>>   137409583 relTargetNextCheck
>>>>   6917470274 forwardLinks
>>>>   7991190672 backLinks
>>>>   1052730774 nullLinks
>>>> 2017-02-26 00:03:16.887+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:1.09 GB, free:1.07 GB, total:2.17 GB, max:26.67 GB]
>>>> 2017-02-26 00:03:16.887+0000 INFO  [o.n.c.ConsistencyCheckService] Done in  1h 36m 37s 219ms
>>>> .........2017-02-26 00:23:26.188+0000 INFO  [o.n.c.ConsistencyCheckService] === RelationshipGroupStore-RelGrp ===
>>>> 2017-02-26 00:23:26.189+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
>>>> NodeStore
>>>>   Reads: 231527337
>>>>   Random Reads: 228593774
>>>>   ScatterIndex: 98
>>>> RelationshipStore
>>>>   Reads: 420334193
>>>>   Random Reads: 143404207
>>>>   ScatterIndex: 34
>>>> RelationshipGroupStore
>>>>   Reads: 409845841
>>>>   Random Reads: 105935972
>>>>   ScatterIndex: 25
>>>>
>>>> 2017-02-26 00:23:26.189+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
>>>> 2017-02-26 00:23:26.190+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:751.21 MB, free:1.29 GB, total:2.02 GB, max:26.67 GB]
>>>> 2017-02-26 00:23:26.191+0000 INFO  [o.n.c.ConsistencyCheckService] Done in  20m 9s 303ms
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-11"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.getFrame(OrdsSegmentTermsEnum.java:131)
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.pushFrame(OrdsSegmentTermsEnum.java:158)
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:391)
>>>> at org.apache.lucene.index.TermContext.build(TermContext.java:94)
>>>> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
>>>> at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
>>>> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
>>>> at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
>>>> at org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
>>>> at org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
>>>> at org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
>>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-21"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnumFrame.<init>(OrdsSegmentTermsEnumFrame.java:52)
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.<init>(OrdsSegmentTermsEnum.java:84)
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsFieldReader.iterator(OrdsFieldReader.java:141)
>>>> at org.apache.lucene.index.TermContext.build(TermContext.java:93)
>>>> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
>>>> at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
>>>> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
>>>> at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
>>>> at org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
>>>> at org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
>>>> at org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
>>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-8"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.getFrame(OrdsSegmentTermsEnum.java:128)
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.pushFrame(OrdsSegmentTermsEnum.java:158)
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:391)
>>>> at org.apache.lucene.index.TermContext.build(TermContext.java:94)
>>>> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
>>>> at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
>>>> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
>>>> at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
>>>> at org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
>>>> at org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
>>>> at org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
>>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-46"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> at org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.newOutput(FSTOrdsOutputs.java:225)
>>>> at org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.add(FSTOrdsOutputs.java:162)
>>>> at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:450)
>>>> at org.apache.lucene.index.TermContext.build(TermContext.java:94)
>>>> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
>>>> at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
>>>> at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>> at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
>>>> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
>>>> at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
>>>> at org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
>>>> at org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
>>>> at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
>>>> at org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
>>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-22"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-10"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-40"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-58"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-61"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>
>>>>
>>>>
>>>>
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-18"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-25"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-45"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-28"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-50"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-39"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-51"
>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "Neo4j" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to neo4j+un...@googlegroups.com.
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
