Hi,

I have a Neo4j database with several million nodes and roughly as many 
relationships. While a program of mine was adding data to it, the JVM 
appears to have crashed. When I later tried to query the database through 
an index, it opened normally and retrieved some of the nodes, but at some 
point it returned the following error:

Exception in thread "main" org.neo4j.graphdb.NotFoundException: Node[20924] not found. This can be because someone else deleted this entity while we were trying to read properties from it, or because of concurrent modification of other properties on this entity. The problem should be temporary.
    at org.neo4j.kernel.impl.core.Primitive.ensureFullProperties(Primitive.java:601)
    at org.neo4j.kernel.impl.core.Primitive.ensureFullProperties(Primitive.java:579)
    at org.neo4j.kernel.impl.core.Primitive.hasProperty(Primitive.java:309)
    at org.neo4j.kernel.impl.core.NodeImpl.hasProperty(NodeImpl.java:53)
    at org.neo4j.kernel.impl.core.NodeProxy.hasProperty(NodeProxy.java:160)
    at org.neo4j.cypher.internal.spi.gdsimpl.GDSBackedQueryContext$$anon$1.hasProperty(GDSBackedQueryContext.scala:66)
    at org.neo4j.cypher.internal.spi.gdsimpl.GDSBackedQueryContext$$anon$1.hasProperty(GDSBackedQueryContext.scala:48)
    at org.neo4j.cypher.internal.commands.Has.isMatch(Predicate.scala:203)
    at org.neo4j.cypher.internal.pipes.FilterPipe$$anonfun$internalCreateResults$1.apply(FilterPipe.scala:30)
    at org.neo4j.cypher.internal.pipes.FilterPipe$$anonfun$internalCreateResults$1.apply(FilterPipe.scala:30)
    at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1156)
    at org.neo4j.cypher.internal.pipes.EagerAggregationPipe.internalCreateResults(EagerAggregationPipe.scala:76)
    at org.neo4j.cypher.internal.pipes.PipeWithSource.createResults(Pipe.scala:69)
    at org.neo4j.cypher.internal.pipes.PipeWithSource.createResults(Pipe.scala:66)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl.org$neo4j$cypher$internal$executionplan$ExecutionPlanImpl$$prepareStateAndResult(ExecutionPlanImpl.scala:164)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl$$anonfun$getLazyReadonlyQuery$1.apply(ExecutionPlanImpl.scala:139)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl$$anonfun$getLazyReadonlyQuery$1.apply(ExecutionPlanImpl.scala:138)
    at org.neo4j.cypher.internal.executionplan.ExecutionPlanImpl.execute(ExecutionPlanImpl.scala:38)
    at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:72)
    at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:67)
    at org.neo4j.cypher.javacompat.ExecutionEngine.execute(ExecutionEngine.java:66)
    at querygraph.BasicStatsQueries.main(BasicStatsQueries.java:54)
Caused by: org.neo4j.kernel.impl.nioneo.store.InvalidRecordException: PropertyRecord[11853043] not in use
    at org.neo4j.kernel.impl.nioneo.store.PropertyStore.getRecord(PropertyStore.java:453)
    at org.neo4j.kernel.impl.nioneo.store.PropertyStore.getLightRecord(PropertyStore.java:306)
    at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.getPropertyRecordChain(ReadTransaction.java:185)
    at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.loadProperties(ReadTransaction.java:215)
    at org.neo4j.kernel.impl.nioneo.xa.ReadTransaction.nodeLoadProperties(ReadTransaction.java:239)
    at org.neo4j.kernel.impl.persistence.PersistenceManager.loadNodeProperties(PersistenceManager.java:111)
    at org.neo4j.kernel.impl.core.NodeManager.loadProperties(NodeManager.java:833)
    at org.neo4j.kernel.impl.core.NodeImpl.loadProperties(NodeImpl.java:143)
    at org.neo4j.kernel.impl.core.Primitive.ensureFullProperties(Primitive.java:596)
    ... 23 more
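For reference, the query runs from embedded Java code along these lines 
(simplified; the store path, index name, key, and the exact Cypher here are 
placeholders chosen to match the shape of the trace, not my real ones):

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class BasicStatsQueries {
    public static void main(String[] args) {
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("/path/to/graph.db");
        ExecutionEngine engine = new ExecutionEngine(db);
        // Read-only aggregation over an index lookup; the NotFoundException
        // is thrown while Cypher loads properties for the WHERE clause.
        ExecutionResult result = engine.execute(
                "START n=node:myIndex('key:*') "
                + "WHERE has(n.someProperty) "
                + "RETURN count(n)");
        System.out.println(result.dumpToString());
        db.shutdown();
    }
}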

There was only one thread running the query (at least, only one that I 
started), and it was doing nothing but reads, no writes. And although the 
exception claims the problem should be temporary, it happens every time I 
query this index, so I assume it is a consequence of the bad shutdown. I 
have had database corruption from forced shutdowns before (prior to adding 
code to prevent them), but Neo4j was always able to recover the database, 
though it took a while. This looks much worse.

When I looped over the index manually with a try-catch and a counter 
(roughly the loop sketched below), I got the same error for every node in 
the index after the one listed above, which was about 6.6K nodes in. Does 
that mean all of those nodes are missing or corrupted? That would be a 
significant (huge) loss of data, since there should be about a million 
nodes in the index. What can I do to recover the database?
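The loop was roughly this (simplified; store path, index name, key, and 
property name are placeholders, not my real ones):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.index.Index;
import org.neo4j.graphdb.index.IndexHits;

public class IndexScanCheck {
    public static void main(String[] args) {
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("/path/to/graph.db");
        Index<Node> index = db.index().forNodes("myIndex");
        IndexHits<Node> hits = index.query("key:*");
        int readable = 0, failing = 0;
        for (Node node : hits) {
            try {
                // hasProperty() forces the property chain to load, which is
                // where the InvalidRecordException comes from
                node.hasProperty("someProperty");
                readable++;
            } catch (Exception e) {
                failing++;
            }
        }
        hits.close();
        System.out.println("readable: " + readable + ", failing: " + failing);
        db.shutdown();
    }
}

hasProperty() alone is enough to trigger the error, so it fails before I 
even read a value.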

Following some advice on SO, I deleted the index and tried to re-index the 
nodes (roughly the code sketched below), but it crashed again at the same 
point, about 6.6K nodes in, with the same error. This database is badly 
needed and the work is time-sensitive - is there anything I haven't tried 
yet that could recover it?
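The re-indexing attempt looked roughly like this (again simplified, with 
placeholder names; in reality I commit in batches rather than one huge 
transaction):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.index.Index;
import org.neo4j.tooling.GlobalGraphOperations;

public class Reindex {
    public static void main(String[] args) {
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("/path/to/graph.db");

        // Drop the old index; the deletion takes effect when this tx commits.
        Transaction tx = db.beginTx();
        try {
            db.index().forNodes("myIndex").delete();
            tx.success();
        } finally {
            tx.finish();
        }

        // Recreate the index and re-add every node with the indexed property.
        Transaction tx2 = db.beginTx();
        try {
            Index<Node> index = db.index().forNodes("myIndex");
            for (Node node : GlobalGraphOperations.at(db).getAllNodes()) {
                // hasProperty/getProperty is what blows up on the bad records
                if (node.hasProperty("someProperty")) {
                    index.add(node, "key", node.getProperty("someProperty"));
                }
            }
            tx2.success();
        } finally {
            tx2.finish();
        }
        db.shutdown();
    }
}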

I am using 1.9.2 and would love to upgrade to get labels and so on, but I 
need this database right now for some time-critical work and don't have 
time to change anything major.

Thanks a lot in advance for any help.

bsg
