It appears not; with JAVA_MEMORY_OPTS set it still fails with the same OOM error:

$env
JAVA_MEMORY_OPTS=-Xmx32G -Xms32G
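
(For reference, a minimal sketch of how I am setting it, assuming a plain shell export before the run; the database name and flags are the same ones from my earlier command:)

export JAVA_MEMORY_OPTS="-Xmx32G -Xms32G"    # value shown above; exported in the same shell
neo4j-admin check-consistency --database=test.db --verbose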

.
.
.


....................  90%
2017-03-01 23:24:55.705+0000 INFO  [o.n.c.ConsistencyCheckService] === Stage7_RS_Backward ===
2017-03-01 23:24:55.706+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
RelationshipStore
  Reads: 3373036269
  Random Reads: 2732592348
  ScatterIndex: 81

2017-03-01 23:24:55.707+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
  10338061780 skipCheck
  1697668359 missCheck
  5621138678 checked
  10338061780 correctSkipCheck
  1688855306 skipBackup
  3951022794 overwrite
  2191262 noCacheSkip
  239346600 activeCache
  119509522 clearCache
  2429587416 relSourcePrevCheck
  995786837 relSourceNextCheck
  2058354842 relTargetPrevCheck
  137409583 relTargetNextCheck
  6917470274 forwardLinks
  7991190672 backLinks
  1052730774 nullLinks
2017-03-01 23:24:55.708+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:404.70 MB, free:1.63 GB, total:2.03 GB, max:26.67 GB]
2017-03-01 23:24:55.708+0000 INFO  [o.n.c.ConsistencyCheckService] Done in 1h 37m 39s 828ms
.........2017-03-01 23:45:36.032+0000 INFO  [o.n.c.ConsistencyCheckService] === RelationshipGroupStore-RelGrp ===
2017-03-01 23:45:36.032+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
RelationshipGroupStore
  Reads: 410800979
  Random Reads: 102164662
  ScatterIndex: 24
NodeStore
  Reads: 229862945
  Random Reads: 226895703
  ScatterIndex: 98
RelationshipStore
  Reads: 423304043
  Random Reads: 139746630
  ScatterIndex: 33

2017-03-01 23:45:36.032+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
2017-03-01 23:45:36.033+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:661.75 MB, free:1.39 GB, total:2.03 GB, max:26.67 GB]
2017-03-01 23:45:36.034+0000 INFO  [o.n.c.ConsistencyCheckService] Done in 20m 40s 326ms
.Exception in thread "ParallelRecordScanner-Stage8_PS_Props-19" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.lucene.util.BytesRef.<init>(BytesRef.java:73)
at org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.read(FSTOrdsOutputs.java:181)
at org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.read(FSTOrdsOutputs.java:32)
at org.apache.lucene.util.fst.Outputs.readFinalOutput(Outputs.java:77)
at org.apache.lucene.util.fst.FST.readNextRealArc(FST.java:1094)
at org.apache.lucene.util.fst.FST.findTargetArc(FST.java:1262)
at org.apache.lucene.util.fst.FST.findTargetArc(FST.java:1186)
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:405)
at org.apache.lucene.index.TermContext.build(TermContext.java:94)
at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)


I have also tried larger memory values.
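
(The "GC overhead limit exceeded" error means the JVM gave up after spending nearly all of its time in garbage collection while reclaiming very little heap. As a sketch of one variation I have not ruled out, the standard HotSpot flag below disables that safeguard; the 64G heap figure is purely illustrative:)

export JAVA_MEMORY_OPTS="-Xmx64G -Xms64G -XX:-UseGCOverheadLimit"    # illustrative heap size; the -XX flag is a standard HotSpot option, not a Neo4j setting
neo4j-admin check-consistency --database=test.db --verbose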

Wayne.


On Wednesday, 1 March 2017 01:52:47 UTC, Michael Hunger wrote:
>
> Sorry, I just learned that neo4j-admin uses a different variable:
>
> "You can pass memory options to the JVM via the `JAVA_MEMORY_OPTS` 
> variable as a workaround though."
>
>
>
> Sent from my iPhone
>
> On 28 Feb 2017, at 18:50, unrealadmin23 via Neo4j <ne...@googlegroups.com> wrote:
>
> Michael,
>
> After running the check-consistency command for 1 day with the above 
> parameters, it failed in exactly the same manner.
>
> $env | grep -i java
> JAVA_OPTS=-Xmx32G -Xms32G
>
> Any other ideas ?
>
> Wayne
>
>
> On Monday, 27 February 2017 16:57:49 UTC, Michael Hunger wrote:
>>
>> Do you really have that much RAM in your machine? 120G usually doesn't 
>> make sense; most people run with 32G as their largest heap.
>>
>> That said, I asked, and currently the numbers from the config are not 
>> used; you have to do:
>>
>> export JAVA_OPTS="-Xmx24G -Xms24G"
>> neo4j-admin ...
>>
>>
>> On Mon, Feb 27, 2017 at 8:32 AM, unrealadmin23 via Neo4j <
>> ne...@googlegroups.com> wrote:
>>
>>>
>>> I should have said that the heap sizes are the ones I have set in 
>>> neo4j.conf.
>>>
>>> Will these be used by check-consistency, or do I need to supply them 
>>> elsewhere?
>>>
>>> Wayne.
>>>
>>>
>>> On Monday, 27 February 2017 07:27:33 UTC, unreal...@googlemail.com 
>>> wrote:
>>>>
>>>> Michael,
>>>>
>>>> neo4j-admin check-consistency --database=test.db --verbose
>>>>
>>>> dbms.memory.heap.initial_size=120000m
>>>> dbms.memory.heap.max_size=120000m
>>>>
>>>> Wayne.
>>>>
>>>>
>>>>
>>>> On Monday, 27 February 2017 02:47:26 UTC, Michael Hunger wrote:
>>>>>
>>>>> How did you call the consistency checker?
>>>>>
>>>>> How much heap did you provide for it?
>>>>>
>>>>> Cheers, Michael
>>>>>
>>>>>
>>>>> On Sun, Feb 26, 2017 at 8:28 PM, unrealadmin23 via Neo4j <
>>>>> ne...@googlegroups.com> wrote:
>>>>>
>>>>>> The following output was obtained:
>>>>>>
>>>>>> .
>>>>>> .
>>>>>> .
>>>>>>
>>>>>> ....................  90%
>>>>>> 2017-02-26 00:03:16.883+0000 INFO  [o.n.c.ConsistencyCheckService] 
>>>>>> === Stage7_RS_Backward ===
>>>>>> 2017-02-26 00:03:16.885+0000 INFO  [o.n.c.ConsistencyCheckService] 
>>>>>> I/Os
>>>>>> RelationshipStore
>>>>>>   Reads: 3374851294
>>>>>>   Random Reads: 2743390177
>>>>>>   ScatterIndex: 81
>>>>>>
>>>>>> 2017-02-26 00:03:16.886+0000 INFO  [o.n.c.ConsistencyCheckService] 
>>>>>> Counts:
>>>>>>   10338005177 skipCheck
>>>>>>   1697668360 missCheck
>>>>>>   5621138678 checked
>>>>>>   10338005177 correctSkipCheck
>>>>>>   1688855306 skipBackup
>>>>>>   3951022795 overwrite
>>>>>>   2247865 noCacheSkip
>>>>>>   239346598 activeCache
>>>>>>   119509521 clearCache
>>>>>>   2429587416 relSourcePrevCheck
>>>>>>   995786837 relSourceNextCheck
>>>>>>   2058354842 relTargetPrevCheck
>>>>>>   137409583 relTargetNextCheck
>>>>>>   6917470274 forwardLinks
>>>>>>   7991190672 backLinks
>>>>>>   1052730774 nullLinks
>>>>>> 2017-02-26 00:03:16.887+0000 INFO  [o.n.c.ConsistencyCheckService] 
>>>>>> Memory[used:1.09 GB, free:1.07 GB, total:2.17 GB, max:26.67 GB]
>>>>>> 2017-02-26 00:03:16.887+0000 INFO  [o.n.c.ConsistencyCheckService] 
>>>>>> Done in  1h 36m 37s 219ms
>>>>>> .........2017-02-26 00:23:26.188+0000 INFO 
>>>>>>  [o.n.c.ConsistencyCheckService] === RelationshipGroupStore-RelGrp ===
>>>>>> 2017-02-26 00:23:26.189+0000 INFO  [o.n.c.ConsistencyCheckService] 
>>>>>> I/Os
>>>>>> NodeStore
>>>>>>   Reads: 231527337
>>>>>>   Random Reads: 228593774
>>>>>>   ScatterIndex: 98
>>>>>> RelationshipStore
>>>>>>   Reads: 420334193
>>>>>>   Random Reads: 143404207
>>>>>>   ScatterIndex: 34
>>>>>> RelationshipGroupStore
>>>>>>   Reads: 409845841
>>>>>>   Random Reads: 105935972
>>>>>>   ScatterIndex: 25
>>>>>>
>>>>>> 2017-02-26 00:23:26.189+0000 INFO  [o.n.c.ConsistencyCheckService] 
>>>>>> Counts:
>>>>>> 2017-02-26 00:23:26.190+0000 INFO  [o.n.c.ConsistencyCheckService] 
>>>>>> Memory[used:751.21 MB, free:1.29 GB, total:2.02 GB, max:26.67 GB]
>>>>>> 2017-02-26 00:23:26.191+0000 INFO  [o.n.c.ConsistencyCheckService] 
>>>>>> Done in  20m 9s 303ms
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-11" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.getFrame(OrdsSegmentTermsEnum.java:131)
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.pushFrame(OrdsSegmentTermsEnum.java:158)
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:391)
>>>>>> at org.apache.lucene.index.TermContext.build(TermContext.java:94)
>>>>>> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
>>>>>> at 
>>>>>> org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
>>>>>> at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
>>>>>> at 
>>>>>> org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
>>>>>> at 
>>>>>> org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
>>>>>> at 
>>>>>> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-21" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnumFrame.<init>(OrdsSegmentTermsEnumFrame.java:52)
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.<init>(OrdsSegmentTermsEnum.java:84)
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsFieldReader.iterator(OrdsFieldReader.java:141)
>>>>>> at org.apache.lucene.index.TermContext.build(TermContext.java:93)
>>>>>> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
>>>>>> at 
>>>>>> org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
>>>>>> at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
>>>>>> at 
>>>>>> org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
>>>>>> at 
>>>>>> org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
>>>>>> at 
>>>>>> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-8" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.getFrame(OrdsSegmentTermsEnum.java:128)
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.pushFrame(OrdsSegmentTermsEnum.java:158)
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:391)
>>>>>> at org.apache.lucene.index.TermContext.build(TermContext.java:94)
>>>>>> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
>>>>>> at 
>>>>>> org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
>>>>>> at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
>>>>>> at 
>>>>>> org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
>>>>>> at 
>>>>>> org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
>>>>>> at 
>>>>>> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-46" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.newOutput(FSTOrdsOutputs.java:225)
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.add(FSTOrdsOutputs.java:162)
>>>>>> at 
>>>>>> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:450)
>>>>>> at org.apache.lucene.index.TermContext.build(TermContext.java:94)
>>>>>> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
>>>>>> at 
>>>>>> org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
>>>>>> at 
>>>>>> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
>>>>>> at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
>>>>>> at 
>>>>>> org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
>>>>>> at 
>>>>>> org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
>>>>>> at 
>>>>>> org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
>>>>>> at 
>>>>>> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-22" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-10" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-40" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-58" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-61" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-18" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-25" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-45" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-28" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-50" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-39" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>> Exception in thread "ParallelRecordScanner-Stage8_PS_Props-51" 
>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>>
>>>>>
>>
>
