Re: [Neo4j] java.lang.OutOfMemoryError: GC overhead limit exceeded when running 'neo4j-admin check-consistency' - Any ideas ?

2017-03-07 Thread 'Mattias Persson' via Neo4j
j-shared.sh"
>>>
>>> main() {
>>>   setup_environment
>>>   check_java
>>>   build_classpath
>>>   export NEO4J_HOME NEO4J_CONF
>>>   exec "${JAVA_CMD}" -Xmx124G -Xms124G -cp "${CLASSPATH}" \
>>>     -Dfile.encoding=UTF-8 "org.neo4j.commandline.admin.AdminTool" "$@"
>>> }
>>>
>>> main "$@"
>>>
>>> I'll let you know in 24 hours.
>>>
>>> Wayne
>>>
>>> On Saturday, 4 March 2017 00:03:26 UTC, Michael Hunger wrote:
>>>>
>>>> Can you try to edit the script directly and add the memory parameters
>>>> there?
>>>>
>>>> On Fri, Mar 3, 2017 at 8:49 PM, unrealadmin23 via Neo4j <
>>>> ne...@googlegroups.com> wrote:
>>>>
>>>>> Yes
>>>>>
>>>>> Also, in the 90% scan, whatever Java memory parameter I use, htop
>>>>> shows the same memory footprint. It's as if the heap isn't being set as
>>>>> per the env parameters that you are asking me to set.
>>>>>
>>>>> Wayne
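A quick way to tell whether an `-Xmx` flag actually took effect is to ask the running JVM for its effective max heap; a minimal, self-contained sketch (the `HeapCheck` class name is illustrative, not from this thread):

```java
// Print the effective max heap as seen by the running JVM. If -Xmx was
// picked up, the reported value is close to (slightly below) the
// requested size; otherwise it falls back to the JVM default.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %.2f GB%n", maxBytes / (1024.0 * 1024 * 1024));
    }
}
```

Running a class like this with the same environment separates "the variable isn't being exported" from "the tool ignores the variable".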
>>>>>
>>>>>
>>>>> On Friday, 3 March 2017 07:51:48 UTC, Mattias Persson wrote:
>>>>>>
>>>>>> Querying Lucene, at the very least the way consistency checker uses
>>>>>> it, has a drawback that all matching documents will be read and kept in
>>>>>> heap before going through them.
>>>>>>
>>>>>> So let me ask you something about your data: are there certain
>>>>>> property values that are very common and also indexed?
>>>>>>
>>>>>> On Thursday, March 2, 2017 at 7:07:31 PM UTC+1,
>>>>>> unreal...@googlemail.com wrote:
>>>>>>>
>>>>>>> It appears not:
>>>>>>>
>>>>>>> $env
>>>>>>> JAVA_MEMORY_OPTS=-Xmx32G -Xms32G
>>>>>>>
>>>>>>> .
>>>>>>> .
>>>>>>> .
>>>>>>>
>>>>>>>
>>>>>>>   90%
>>>>>>> 2017-03-01 23:24:55.705+ INFO  [o.n.c.ConsistencyCheckService]
>>>>>>> === Stage7_RS_Backward ===
>>>>>>> 2017-03-01 23:24:55.706+ INFO  [o.n.c.ConsistencyCheckService]
>>>>>>> I/Os
>>>>>>> RelationshipStore
>>>>>>>   Reads: 3373036269
>>>>>>>   Random Reads: 2732592348
>>>>>>>   ScatterIndex: 81
>>>>>>>
>>>>>>> 2017-03-01 23:24:55.707+ INFO  [o.n.c.ConsistencyCheckService]
>>>>>>> Counts:
>>>>>>>   10338061780 skipCheck
>>>>>>>   1697668359 missCheck
>>>>>>>   5621138678 checked
>>>>>>>   10338061780 correctSkipCheck
>>>>>>>   1688855306 skipBackup
>>>>>>>   3951022794 overwrite
>>>>>>>   2191262 noCacheSkip
>>>>>>>   239346600 activeCache
>>>>>>>   119509522 clearCache
>>>>>>>   2429587416 relSourcePrevCheck
>>>>>>>   995786837 relSourceNextCheck
>>>>>>>   2058354842 relTargetPrevCheck
>>>>>>>   137409583 relTargetNextCheck
>>>>>>>   6917470274 forwardLinks
>>>>>>>   7991190672 backLinks
>>>>>>>   1052730774 nullLinks
>>>>>>> 2017-03-01 23:24:55.708+ INFO  [o.n.c.ConsistencyCheckService]
>>>>>>> Memory[used:404.70 MB, free:1.63 GB, total:2.03 GB, max:26.67 GB]
>>>>>>> 2017-03-01 23:24:55.708+ INFO  [o.n.c.ConsistencyCheckService]
>>>>>>> Done in  1h 37m 39s 828ms
>>>>>>> .2017-03-01 23:45:36.032+ INFO
>>>>>>>  [o.n.c.ConsistencyCheckService] === RelationshipGroupStore-RelGrp
>>>>>>> ===
>>>>>>> 2017-03-01 23:45:36.032+ INFO  [o.n.c.ConsistencyCheckService]
>>>>>>> I/Os
>>>>>>> RelationshipGroupStore
>>>>>>>   Reads: 410800979
>>>>>>>   Random Reads: 102164662
>>>>>>>   ScatterIndex: 24
>>>>>>> NodeStore
>>>>>>>   Reads: 229862945
>>>>>>>   Random Reads: 22689570

Re: [Neo4j] java.lang.OutOfMemoryError: GC overhead limit exceeded when running 'neo4j-admin check-consistency' - Any ideas ?

2017-03-02 Thread 'Mattias Persson' via Neo4j
Querying Lucene, at the very least the way consistency checker uses it, has 
a drawback that all matching documents will be read and kept in heap before 
going through them.

So let me ask you something about your data: are there certain property 
values that are very common and also indexed?

On Thursday, March 2, 2017 at 7:07:31 PM UTC+1, unreal...@googlemail.com 
wrote:
>
> It appears not:
>
> $env
> JAVA_MEMORY_OPTS=-Xmx32G -Xms32G
>
> .
> .
> .
>
>
>   90%
> 2017-03-01 23:24:55.705+ INFO  [o.n.c.ConsistencyCheckService] === 
> Stage7_RS_Backward ===
> 2017-03-01 23:24:55.706+ INFO  [o.n.c.ConsistencyCheckService] I/Os
> RelationshipStore
>   Reads: 3373036269
>   Random Reads: 2732592348
>   ScatterIndex: 81
>
> 2017-03-01 23:24:55.707+ INFO  [o.n.c.ConsistencyCheckService] Counts:
>   10338061780 skipCheck
>   1697668359 missCheck
>   5621138678 checked
>   10338061780 correctSkipCheck
>   1688855306 skipBackup
>   3951022794 overwrite
>   2191262 noCacheSkip
>   239346600 activeCache
>   119509522 clearCache
>   2429587416 relSourcePrevCheck
>   995786837 relSourceNextCheck
>   2058354842 relTargetPrevCheck
>   137409583 relTargetNextCheck
>   6917470274 forwardLinks
>   7991190672 backLinks
>   1052730774 nullLinks
> 2017-03-01 23:24:55.708+ INFO  [o.n.c.ConsistencyCheckService] 
> Memory[used:404.70 MB, free:1.63 GB, total:2.03 GB, max:26.67 GB]
> 2017-03-01 23:24:55.708+ INFO  [o.n.c.ConsistencyCheckService] Done in 
>  1h 37m 39s 828ms
> .2017-03-01 23:45:36.032+ INFO 
>  [o.n.c.ConsistencyCheckService] === RelationshipGroupStore-RelGrp ===
> 2017-03-01 23:45:36.032+ INFO  [o.n.c.ConsistencyCheckService] I/Os
> RelationshipGroupStore
>   Reads: 410800979
>   Random Reads: 102164662
>   ScatterIndex: 24
> NodeStore
>   Reads: 229862945
>   Random Reads: 226895703
>   ScatterIndex: 98
> RelationshipStore
>   Reads: 423304043
>   Random Reads: 139746630
>   ScatterIndex: 33
>
> 2017-03-01 23:45:36.032+ INFO  [o.n.c.ConsistencyCheckService] Counts:
> 2017-03-01 23:45:36.033+ INFO  [o.n.c.ConsistencyCheckService] 
> Memory[used:661.75 MB, free:1.39 GB, total:2.03 GB, max:26.67 GB]
> 2017-03-01 23:45:36.034+ INFO  [o.n.c.ConsistencyCheckService] Done in 
>  20m 40s 326ms
> .Exception in thread "ParallelRecordScanner-Stage8_PS_Props-19" 
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> at org.apache.lucene.util.BytesRef.<init>(BytesRef.java:73)
> at 
> org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.read(FSTOrdsOutputs.java:181)
> at 
> org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.read(FSTOrdsOutputs.java:32)
> at org.apache.lucene.util.fst.Outputs.readFinalOutput(Outputs.java:77)
> at org.apache.lucene.util.fst.FST.readNextRealArc(FST.java:1094)
> at org.apache.lucene.util.fst.FST.findTargetArc(FST.java:1262)
> at org.apache.lucene.util.fst.FST.findTargetArc(FST.java:1186)
> at 
> org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:405)
> at org.apache.lucene.index.TermContext.build(TermContext.java:94)
> at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
> at 
> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
> at 
> org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
> at 
> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
> at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
>
>
> I have also tried larger memory values.
>
> Wayne.
>
>
> On Wednesday, 1 March 2017 01:52:47 UTC, Michael Hunger wrote:
>>
>> Sorry, I just learned that neo4j-admin uses a different variable: 
>>
>> "You can pass memory options to the JVM via the `JAVA_MEMORY_OPTS` 
>> variable as a workaround though."
>>
>>
>>
>> Sent from my iPhone
>>
>> On 28.02.2017 at 18:50, unrealadmin23 via Neo4j <
>> ne...@googlegroups.com> wrote:
>>
>> Michael,
>>
>> After running the check_consistency command for 1 day with the above 
>> parameters, it failed in exactly the same manner.
>>
>> $env | grep -i java
>> JAVA_OPTS=-Xmx32G -Xms32G
>>
>> Any other ideas ?
>>
>> Wayne
>>
>>
>> On Monday, 27 February 2017 16:57:49 UTC, Michael Hunger wrote:
>>>
>>> Do you really have that much RAM in your machine? 120G usually doesn't 
>>> make sense. Most people run with 32G as a large heap.
>>>
>>> That said, I asked, and currently the numbers from the config are not 
>>> used; you have to do:
>>>
>>> export JAVA_OPTS="-Xmx24G -Xms24G"
>>> neo4j-admin ...
>>>
>>>
>>> On Mon, Feb 27, 2017 at 8:32 AM, unrealadmin23 via Neo4j <
>>> ne...@googlegroups.com> wrote:
>>>

 I should have said that the heap sizes are the ones that I have set in 
 neo4j.conf.

 Will these be used by check-consistency, or do I need to supply them 
 elsewhere?

 Wayne.


 On Monday, 27 February 2017 07:27:33 UTC, unreal...@googlemail.com 
 wrote:
>
> Michael,
>
> neo4

[Neo4j] Re: Traversal Performance

2017-01-22 Thread 'Mattias Persson' via Neo4j
Max, `.nodes()` does this, gets the `endNode()` of all paths.

Felix, that sounds awfully long, have you run some profiling on this?

On Friday, January 20, 2017 at 7:28:35 PM UTC+1, Felix Dietze wrote:
>
> Hi,
>
> I'd like to know if it's possible to do a faster traversal than in this 
> stored procedure.
>
> I want to return all reachable (directed) nodes from a start node:
>
> public class ConnectedComponent {
> @Context
> public GraphDatabaseService graphDatabaseService;
>
>
> private static enum RelTypes implements RelationshipType {
> CONNECTS
> }
>
>
> @Procedure(value = "connectedComponent")
> public Stream<Visited> connectedComponent(@Name("start") Node start) {
> return graphDatabaseService.traversalDescription()
>.depthFirst()
>.relationships( RelTypes.CONNECTS, Direction.OUTGOING )
>.uniqueness( Uniqueness.RELATIONSHIP_GLOBAL )
>.traverse( start )
>.nodes().stream().map(Visited::new);
> }
>
>
> public static class Visited {
> public final Node node;
> public Visited(Node node) {
> this.node = node;
> }
> }
> }
>
>
> I call the procedure like this:
> match (n:Post {id: "3235"}) CALL connectedComponent(n) yield node return 
> node;
>
>
> This takes around 3 seconds for 15k visited nodes. That's about 10x slower 
> compared to a stored procedure in postgres on the same data. Any ideas?
>
>
>
> Thank you!
>
>
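For comparison, the raw graph work involved here — computing the reachable set with a depth-first search — is cheap on in-heap data structures; a self-contained sketch using plain Java collections (not the Neo4j API), which suggests the 3 s figure is dominated by store access or procedure overhead rather than the traversal itself:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Iterative depth-first search over an adjacency list: returns every
// node reachable from `start` following directed edges.
public class Reachable {
    public static Set<Integer> reachable(Map<Integer, List<Integer>> adj, int start) {
        Set<Integer> visited = new HashSet<>();
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int n = stack.pop();
            if (!visited.add(n)) continue;            // already visited
            for (int next : adj.getOrDefault(n, List.of())) {
                if (!visited.contains(next)) stack.push(next);
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        // Tiny diamond graph: 1 -> {2, 3}, 2 -> 4, 3 -> 4
        Map<Integer, List<Integer>> adj = Map.of(
                1, List.of(2, 3),
                2, List.of(4),
                3, List.of(4),
                4, List.of());
        System.out.println(reachable(adj, 1));
    }
}
```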

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Way to create Neo4j indexes safely?

2017-01-12 Thread 'Mattias Persson' via Neo4j
It's strange that it's holding a schema lock; it shouldn't.

On Thu, Jan 12, 2017 at 3:10 PM, Matias Burak  wrote:

> Just an index.
>
> On 12 Jan 2017, at 11:08, Mattias Persson 
> wrote:
>
> OK cool, and are you creating constraint or just an index?
>
> On Thu, Jan 12, 2017 at 2:58 PM, Matias Burak  wrote:
>
>> Hi Mattias, it looks like it, this is the kind of exceptions we are
>> getting:
>>
>> org.springframework.transaction.HeuristicCompletionException: Heuristic
>> completion: outcome state is rolled back; nested exception is
>> org.neo4j.driver.v1.exceptions.TransientException: LockClient[7068] can't
>> wait on resource RWLock[SCHEMA(0), hash=192551521] since =>
>> LockClient[7068] <-[:HELD_BY]- RWLock[SCHEMA(0), hash=192551521]
>> <-[:WAITING_FOR]- LockClient[5626] <-[:HELD_BY]- RWLock[SCHEMA(0),
>> hash=192551521]
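A `TransientException` like the one above is, by contract, an error the client may retry. A generic retry-with-backoff wrapper, sketched here for illustration (the `Retry` class and its parameters are assumptions, not Neo4j API):

```java
import java.util.function.Supplier;

// Retry a unit of work a bounded number of times, sleeping a little
// longer after each failed attempt. Any RuntimeException is treated as
// transient here for simplicity; a real client would inspect the
// exception type and retry only on transient errors.
public class Retry {
    public static <T> T withRetry(Supplier<T> work, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.get();
            } catch (RuntimeException e) {
                last = e;
                try {
                    Thread.sleep(50L * attempt);  // linear backoff between attempts
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw last;  // out of attempts: rethrow the last failure
    }
}
```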
>>
>> On Thursday, 12 January 2017, 3:39:54 (UTC-3), Mattias Persson wrote:
>>>
>>> CREATE INDEX shouldn't keep a schema lock during the duration of
>>> population of the index. Is that what you're seeing here?
>>>
>>> Perhaps are you creating constraints?
>>>
>>> On Wednesday, January 11, 2017 at 9:07:42 PM UTC+1, Matias Burak wrote:
>>>>
>>>> Hi Michael, we might have a lot of operations running at the same time,
>>>> like CREATE and MERGE, and some might write thousands of records in a single
>>>> transaction. There can be several of these running concurrently, but I guess not more
>>>> than 5...10 at most. And yes, they might be creating/updating nodes for
>>>> that label.
>>>>
>>>> On Wednesday, 11 January 2017, 15:56:57 (UTC-3), Michael Hunger
>>>> wrote:
>>>>>
>>>>> Matias,
>>>>>
>>>>> can you describe the other kinds of queries that are running (reads,
>>>>> writes, do they also touch the :User label?) how many of them and how
>>>>> concurrent?
>>>>>
>>>>> Michael
>>>>>
>>>>> On Tue, Jan 10, 2017 at 11:32 PM, Matias Burak 
>>>>> wrote:
>>>>>
>>>>>> Hi guys,
>>>>>>
>>>>>> Is there a way to create indexes safely while running an application?
>>>>>> We need to create indexes on a remote Neo4j server dynamically while
>>>>>> the system is running, so it might be doing other calls to the Neo4j server.
>>>>>> Right now we create them by running a query like "CREATE INDEX ON
>>>>>> :User(name)", but that is locking the whole database, and sometimes we are
>>>>>> getting deadlocks and eventually the server stops responding.
>>>>>>
>>>>>> Is there something we can do to avoid this behavior?
>>>>>> We are running the latest 3.1 version.
>>>>>>
>>>>>> Thanks,
>>>>>> Matias.
>>>>>>
>>>>>> --
>>>>>> You received this message because you are subscribed to the Google
>>>>>> Groups "Neo4j" group.
>>>>>> To unsubscribe from this group and stop receiving emails from it,
>>>>>> send an email to neo4j+un...@googlegroups.com.
>>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>>
>>>>>
>>>>>
>
>
> --
> Mattias Persson
> Neo4j Hacker at Neo Technology
>
>
>


-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] Way to create Neo4j indexes safely?

2017-01-12 Thread 'Mattias Persson' via Neo4j
OK cool, and are you creating constraint or just an index?

On Thu, Jan 12, 2017 at 2:58 PM, Matias Burak  wrote:

> Hi Mattias, it looks like it, this is the kind of exceptions we are
> getting:
>
> org.springframework.transaction.HeuristicCompletionException: Heuristic
> completion: outcome state is rolled back; nested exception is
> org.neo4j.driver.v1.exceptions.TransientException: LockClient[7068] can't
> wait on resource RWLock[SCHEMA(0), hash=192551521] since =>
> LockClient[7068] <-[:HELD_BY]- RWLock[SCHEMA(0), hash=192551521]
> <-[:WAITING_FOR]- LockClient[5626] <-[:HELD_BY]- RWLock[SCHEMA(0),
> hash=192551521]
>
> On Thursday, 12 January 2017, 3:39:54 (UTC-3), Mattias Persson wrote:
>>
>> CREATE INDEX shouldn't keep a schema lock during the duration of
>> population of the index. Is that what you're seeing here?
>>
>> Perhaps are you creating constraints?
>>
>> On Wednesday, January 11, 2017 at 9:07:42 PM UTC+1, Matias Burak wrote:
>>>
>>> Hi Michael, we might have a lot of operations running at the same time,
>>> like CREATE and MERGE, and some might write thousands of records in a single
>>> transaction. There can be several of these running concurrently, but I guess not more
>>> than 5...10 at most. And yes, they might be creating/updating nodes for
>>> that label.
>>>
>>> On Wednesday, 11 January 2017, 15:56:57 (UTC-3), Michael Hunger
>>> wrote:
>>>>
>>>> Matias,
>>>>
>>>> can you describe the other kinds of queries that are running (reads,
>>>> writes, do they also touch the :User label?) how many of them and how
>>>> concurrent?
>>>>
>>>> Michael
>>>>
>>>> On Tue, Jan 10, 2017 at 11:32 PM, Matias Burak 
>>>> wrote:
>>>>
>>>>> Hi guys,
>>>>>
>>>>> Is there a way to create indexes safely while running an application?
>>>>> We need to create indexes on a remote Neo4j server dynamically while
>>>>> the system is running, so it might be doing other calls to the Neo4j server.
>>>>> Right now we create them by running a query like "CREATE INDEX ON
>>>>> :User(name)", but that is locking the whole database, and sometimes we are
>>>>> getting deadlocks and eventually the server stops responding.
>>>>>
>>>>> Is there something we can do to avoid this behavior?
>>>>> We are running the latest 3.1 version.
>>>>>
>>>>> Thanks,
>>>>> Matias.
>>>>>
>>>>> --
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "Neo4j" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to neo4j+un...@googlegroups.com.
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>>
>>>>


-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] Way to create Neo4j indexes safely?

2017-01-11 Thread 'Mattias Persson' via Neo4j
CREATE INDEX shouldn't keep a schema lock during the duration of population 
of the index. Is that what you're seeing here?

Perhaps are you creating constraints?

On Wednesday, January 11, 2017 at 9:07:42 PM UTC+1, Matias Burak wrote:
>
> Hi Michael, we might have a lot of operations running at the same time, 
> like CREATE and MERGE, and some might write thousands of records in a single 
> transaction. There can be several of these running concurrently, but I guess not more 
> than 5...10 at most. And yes, they might be creating/updating nodes for 
> that label.
>
> On Wednesday, 11 January 2017, 15:56:57 (UTC-3), Michael Hunger 
> wrote:
>>
>> Matias,
>>
>> can you describe the other kinds of queries that are running (reads, 
>> writes, do they also touch the :User label?) how many of them and how 
>> concurrent?
>>
>> Michael
>>
>> On Tue, Jan 10, 2017 at 11:32 PM, Matias Burak  wrote:
>>
>>> Hi guys,
>>>
>>> Is there a way to create indexes safely while running an application? 
>>> We need to create indexes on a remote Neo4j server dynamically while the 
>>> system is running, so it might be doing other calls to the Neo4j server.
>>> Right now we create them by running a query like "CREATE INDEX ON 
>>> :User(name)", but that is locking the whole database, and sometimes we are 
>>> getting deadlocks and eventually the server stops responding.
>>>
>>> Is there something we can do to avoid this behavior?
>>> We are running the latest 3.1 version.
>>>
>>> Thanks,
>>> Matias.
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Neo4j" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to neo4j+un...@googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>



[Neo4j] Re: Traversal path questions

2017-01-08 Thread 'Mattias Persson' via Neo4j
1. Have you tried RELATIONSHIP_PATH uniqueness?

2. Are you basing this on type only? You could use 
https://github.com/neo4j/neo4j/blob/3.1/community/graphdb-api/src/main/java/org/neo4j/graphdb/impl/OrderedByTypeExpander.java
 
perhaps?

On Friday, January 6, 2017 at 11:35:38 PM UTC+1, Furkan Göz wrote:
>
> Dear Friends, 
>
> I'm working with Neo4j in Java. I have some paths and I'm trying to find some 
> relations. I have two problems, illustrated by the example below. If it is 
> possible, I will be waiting for your answers.
>
> for (Path path : 
> graphDb.traversalDescription().depthFirst().uniqueness(Uniqueness.NODE_PATH)
> .evaluator(Evaluators.atDepth(3))
> .relationships(relation.A, Direction.BOTH)
> .relationships(relation.B, Direction.BOTH)
> .relationships(relation.C, Direction.BOTH)
> .traverse(node)) {
>
> }
>
> 1. I want to use each relation just one time.   
>
> For example:   (1)--[A]-->(2)--[A]-->(3)--[B]-->(4)it is not available 
> for me - A is used two times
>  
> (1)--[C]-->(2)--[A]-->(3)--[B]-->(4)   it is 
> available - each relation is unique
>
>
> 2. I want to use relations in order or priority.
>
> For example: (1)--[B]-->(2)--[A]-->(3)--[C]-->(4)it is not 
> available for me because in my path  B is first relationship 
>  
> (1)--[A]-->(2)--[B]-->(3)--[C]-->(4)   it is 
> available - each relation is in an order
>
>
> Note: I can solve this with other Java code, but the running time is 
> too slow. 
>
>
> Best regards.
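The two constraints above can be phrased as predicates over the sequence of relationship types along a path, which a custom evaluator could then apply. A plain-Java sketch of just the acceptance rules (the `PathRules` class and the A/B/C type names are illustrative, not the Neo4j traversal API):

```java
import java.util.HashSet;
import java.util.List;

// Rule 1: each relationship type may be used at most once along a path.
// Rule 2: types must follow a fixed priority order; a path may skip a
// type but must never step back to an earlier one.
public class PathRules {
    public static boolean typesUnique(List<String> types) {
        // A set collapses duplicates, so sizes differ iff a type repeats.
        return new HashSet<>(types).size() == types.size();
    }

    public static boolean typesInOrder(List<String> types, List<String> order) {
        int lastRank = -1;
        for (String type : types) {
            int rank = order.indexOf(type);
            if (rank < lastRank) return false;  // stepped back to an earlier type
            lastRank = rank;
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> order = List.of("A", "B", "C");
        System.out.println(typesUnique(List.of("A", "A", "B")));          // A reused
        System.out.println(typesInOrder(List.of("B", "A", "C"), order));  // B before A
    }
}
```

Checking the rules per step inside an `Evaluator` avoids expanding branches that already violate them.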
>



Re: [Neo4j] Neo4j: Ver 3.0.4 delete and re-insert label corruption

2016-09-19 Thread 'Mattias Persson' via Neo4j
Failures to populate an index should put the actual error in 
/schema/index/lucene//failure-message or similar. Can you see 
what's inside that file?

On Friday, September 2, 2016 at 5:31:24 PM UTC+2, Kevin McGinn wrote:
>
> I deleted the index and attempted to restart. After an hour, I received an 
> out of memory error. I changed the heap size from 4G to 8G and retried. 
> After 45 min. I received the error:
>
> Caused by: java.lang.IllegalStateException: Index entered FAILED state 
> while recovery waited for it to be fully populated
> at 
> org.neo4j.kernel.impl.api.index.IndexingService.awaitOnline(IndexingService.java:339)
> at 
> org.neo4j.kernel.impl.api.index.IndexingService.start(IndexingService.java:319)
>
> I am not sure if there are any options other than starting from scratch, 
> which is highly undesirable.
>
>
> Kevin P. McGinn, PMP
>
> Absolute Performance Inc.
> 12303 Airport Way, Suite 100
> Broomfield, CO 80021
>
> cell: (303) 305-9844
> kmc...@absolute-performance.com 
>
> NON-DISCLOSURE NOTICE:  This communication including any and all 
> attachments is for the intended recipient(s) only and may contain 
> confidential and privileged information.  If you are not the intended 
> recipient of this communication, any disclosure, copying further 
> distribution or use of this communication is prohibited.  If you received 
> this communication in error, please contact the sender and delete/destroy 
> all copies of this communication immediately.
>
> On Fri, Sep 2, 2016 at 6:33 AM, 'Michael Hunger' via Neo4j <
> ne...@googlegroups.com > wrote:
>
>> That sounds really unusual.
>>
>> Can you stop the server and delete 
>> $NEO4J_HOME/data/databases/graph.db/schema/label
>>
>> and restart, then the label index should be rebuilt.
>>
>> also please share the result of the "schema" command.
>>
>> Michael
>>
>> On 02.09.2016 at 14:28, Kevin McGinn <
>> kmc...@absolute-performance.com > wrote:
>>
>> The count of 29 is via the command:
>>   match(n:clients) return count(*);
>>
>> The query
>>   match(n:clients) return n;
>>
>> returns an empty set.
>>
>> ClientsID had a unique constraint defined on. I dropped it with the 
>> intent of re-creating in hopes it would help correct the problem.
>>
>> Currently, I can not restart neo4j. I shut down neo4j and removed the 
>> indexes to force an index rebuild. At restart attempt, the neo4j.log file 
>> contains the error:
>>
>> Caused by: java.lang.IllegalStateException: Index entered FAILED state 
>> while recovery waited for it to be fully populated
>>
>> If I can figure out what this means and fix this error, I will reproduce the 
>> clients data error and get more information to you. As an aside, I have 66G 
>> of data and 13G of indexes, but only 36G of RAM.
>>
>>
>> Kevin P. McGinn, PMP
>>
>> Absolute Performance Inc.
>> 12303 Airport Way, Suite 100
>> Broomfield, CO 80021
>>
>> cell: (303) 305-9844
>> kmc...@absolute-performance.com 
>>
>>
>> On Fri, Sep 2, 2016 at 3:44 AM, 'Michael Hunger' via Neo4j <
>> ne...@googlegroups.com > wrote:
>>
>>> This sounds very unusual.
>>>
>>> Where does the count show 29 ?
>>>
>>> Do you have a constraint on clients(ClientsID) ?
>>> please note that both labels and properties are case-sensitive
>>> Are you sure that your row.ClientsID is unique ?
>>>
>>> Can you share the full error? It is missing the second part that explains 
>>> the duplicate.
>>>
>>> I presume the duplicate is on another constrained field.
>>>
>>> On 01.09.2016 at 22:05, kmc...@absolute-performance.com 
>>> wrote:
>>>
>>> I deleted the nodes from a label:
>>>   match(n:clients) delete n;
>>>
>>> When I attempted to re-load from CSV:
>>>
>>> using periodic commit 1
>>> LOAD CSV WITH HEADERS FROM 
>>> 'file:/export/warehouse/tmp/clients_08-31-2016_02-03.txt' as row 
>>> FIELDTERMINATOR '\t' with row as row where row.ClientsID is not null
>>> merge (x:clients {ClientsID:row.ClientsID})
>>> on create set x+=row on match set x=row
>>> return count(*);
>>>
>>>
>>> I get the error:
>>>
>>> 104 ms
>>>
>>> WARNING: Node with id 122046120
>>>
>>>
>>> I take it this is a node re-use error.
>>>
>>> But the label is now corrupted. There are no node properties when I 
>>> attempt to access the nodes.
>>>
>>> Trying to delete all nodes:
>>>  match (n:clients) delete n;
>>>
>>> produces the result:
>>>
>>> +--------------------------------------------+
>>> | No data returned, and nothing was changed. |
>>> +--------------------------------------------+

[Neo4j] Re: Why does latency from neo4j increases when Database size is in large ?

2016-08-25 Thread 'Mattias Persson' via Neo4j
Perhaps GC? It sounds like far too many threads; things could potentially 
go a lot faster with fewer threads, something like the number of cores on the 
server or some small multiple of that.
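The suggestion above — bounding client concurrency near the core count rather than opening thousands of request threads — can be sketched with a fixed-size pool (illustrative code; a trivial task stands in for the database call):

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Funnel many units of work through a pool of `poolSize` threads: tasks
// queue up instead of all contending for the server at once.
public class BoundedClient {
    public static int runBounded(int tasks, int poolSize) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
            for (int i = 0; i < tasks; i++) {
                cs.submit(() -> 1);  // stand-in for one database call
            }
            int done = 0;
            for (int i = 0; i < tasks; i++) {
                done += cs.take().get();
            }
            return done;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(runBounded(100, cores));
    }
}
```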

On Monday, August 22, 2016 at 5:05:42 PM UTC+2, gitansh...@freecharge.com 
wrote:
>
> My database has around 15 million nodes. 
> The app server is hitting the database with 3000 parallel threads. Earlier the 
> latency from the database was in milliseconds, but as the DB size increased, 
> latencies have been reaching 10 seconds. 
> What can be the possible reason for that?
>



[Neo4j] Re: neo4j-import

2016-05-01 Thread 'Mattias Persson' via Neo4j
neo4j-import can currently only add new nodes/relationships and add 
properties to those new nodes/relationships. It cannot currently update 
existing entities.

On Monday, April 25, 2016 at 1:20:23 PM UTC+2, kincheong lau wrote:
>
> I have a single csv file with just 3 columns and no header
> 1. Category
> 2. Reader
> 3. Time stamp
>
> I would like to use neo4j-import from the shell to set the relationship only 
> for matching readers and categories. Is that possible with neo4j-import?
>
> If using cypher, the query would be :
>
> match (r:reader)-[t:subscribed]->(c:category)
> where r.reader = csv.reader and c.category = csv.category
> set r.timestamp = csv.timestamp
>
> But the CSV is over 1 GB, and all neo4j-import examples import 
> everything.
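Since neo4j-import cannot update existing entities, one workable approach is to stream the large CSV in batches and run the Cypher update once per batch. A self-contained sketch of just the batching side (the `CsvBatcher` class is an assumption for illustration; the driver call that would consume each batch is omitted):

```java
import java.util.ArrayList;
import java.util.List;

// Split tab-separated lines (category, reader, timestamp) into batches
// small enough for one transaction each.
public class CsvBatcher {
    public static List<List<String[]>> batches(List<String> lines, int batchSize) {
        List<List<String[]>> out = new ArrayList<>();
        List<String[]> current = new ArrayList<>();
        for (String line : lines) {
            current.add(line.split("\t", -1));  // keep empty trailing fields
            if (current.size() == batchSize) {
                out.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) {
            out.add(current);  // final partial batch
        }
        return out;
    }
}
```

Each batch would then be passed as a parameter list to a single parameterized update query, which keeps transaction sizes bounded regardless of file size.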
>
>



Re: [Neo4j] Importing error: value too big to be represented as java.lang.Integer

2016-03-30 Thread Mattias Persson
https://github.com/neo4j/neo4j/pull/6180 was the fix for this, made by
yours truly :)

I'm more than fairly confident that's the issue you're running into.
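The failure mode is easy to reproduce in isolation: a long above Integer.MAX_VALUE cannot be narrowed to int without losing information, which is what the checked cast in the stack trace guards against. A sketch of such a guard (modeled on, but not copied from, the Neo4j utility in the trace):

```java
// Narrow a long to int, refusing values that do not round-trip.
// The 4386536741 from the import error is above Integer.MAX_VALUE
// (2147483647), so a checked cast like this must throw.
public class SafeCast {
    public static int safeCastLongToInt(long value) {
        if (value != (int) value) {
            throw new ArithmeticException(
                    "Value " + value + " is too big to be represented as java.lang.Integer");
        }
        return (int) value;
    }
}
```

This is why the error only shows up past roughly 2.1 billion entries: an internal index was tracked as int where the surrounding code produces long values.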

On Wed, Mar 30, 2016 at 6:43 PM, Zhixuan Wang 
wrote:

> Thank you for your reply.
>
>
> I am using 2.2.6; unfortunately it is not that easy for me to install
> new software on my server.
>
> So before reinstalling Neo4j, can you help me understand what that bug
> is? Is there any diagnosis that I can do to confirm it is truly because
> of this bug?
>
>
>
> On Wednesday, March 30, 2016 at 2:26:29 AM UTC-7, Mattias Persson wrote:
>>
>> Which Neo4j version is this? This has been fixed and should be working
>> fine in 2.3.3.
>>
>> On Wednesday, March 30, 2016 at 2:43:23 AM UTC+2, Michael Hunger wrote:
>>>
>>> What is your header definition and what your command-line call?
>>>
>>> michael
>>>
>>> On 30.03.2016 at 01:39, Zhixuan Wang wrote:
>>>
>>> I was trying to import a database with about 2-3 billion nodes.
>>> According to their documentation it should allow for as many as 35
>>> billion nodes.
>>>
>>> However, I still got this error message during neo4j-import:
>>>
>>> Prepare node index
>>> [*:21.14 GB]   0
>>> Exception in thread "Thread-737" java.lang.ArithmeticException: Value
>>> 4386536741 is too big to be represented as java.lang.Integer
>>> at org.neo4j.unsafe.impl.batchimport.Utils.safeCastLongToInt(Utils.java:36)
>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort.sortRadix(ParallelSort.java:142)
>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort.run(ParallelSort.java:68)
>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.EncodingIdMapper.prepare(EncodingIdMapper.java:270)
>>> at org.neo4j.unsafe.impl.batchimport.IdMapperPreparationStep.process(IdMapperPreparationStep.java:54)
>>> at org.neo4j.unsafe.impl.batchimport.staging.LonelyProcessingStep$1.run(LonelyProcessingStep.java:56)
>>> Import error: Value 4386536741 is too big to be represented as java.lang.Integer
>>> java.lang.ArithmeticException: Value 4386536741 is too big to be represented as java.lang.Integer
>>> at org.neo4j.unsafe.impl.batchimport.Utils.safeCastLongToInt(Utils.java:36)
>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort.sortRadix(ParallelSort.java:142)
>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort.run(ParallelSort.java:68)
>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.EncodingIdMapper.prepare(EncodingIdMapper.java:270)
>>> at org.neo4j.unsafe.impl.batchimport.IdMapperPreparationStep.process(IdMapperPreparationStep.java:54)
>>> at org.neo4j.unsafe.impl.batchimport.staging.LonelyProcessingStep$1.run(LonelyProcessingStep.java:56)
>>>
>>> Exception in thread "TrackerInitializer-26" java.lang.ArrayIndexOutOfBoundsException: Requested index -1915368251, but length is 2837264380
>>> at org.neo4j.unsafe.impl.batchimport.cache.OffHeapNumberArray.addressOf(OffHeapNumberArray.java:54)
>>> at org.neo4j.unsafe.impl.batchimport.cache.OffHeapIntArray.set(OffHeapIntArray.java:48)
>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort$TrackerInitializer.run(ParallelSort.java:411)
>>> at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:123)
>>>
>>> It looks to me that something went wrong while neo4j was trying to
>>> create the index. Here come my questions:
>>>
>>> 1. I assume the problem is that the index type is set to
>>>

Re: [Neo4j] Importing error: value too big to be represented as java.lang.Integer

2016-03-30 Thread Mattias Persson
Which Neo4j version is this? This has been fixed and should be working 
fine in 2.3.3.

On Wednesday, March 30, 2016 at 2:43:23 AM UTC+2, Michael Hunger wrote:
>
> What is your header definition, and what is your command-line call?
>
> michael
>
> On 30.03.2016 at 01:39, Zhixuan Wang wrote:
>
> I was trying to import a database with about 2-3 billion nodes.
> According to their documentation it should allow for as many as 35 billion 
> nodes.
>
> However, I still got this error message during neo4j-import:
>
> *Prepare node index*
> *[*:21.14 GB]   0*  (progress line repeated several times; carriage returns stripped)
> *Exception in thread "Thread-737" java.lang.ArithmeticException: Value 
> 4386536741 is too big to be represented as java.lang.Integer*
> *at 
> org.neo4j.unsafe.impl.batchimport.Utils.safeCastLongToInt(Utils.java:36)*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort.sortRadix(ParallelSort.java:142)*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort.run(ParallelSort.java:68)*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.EncodingIdMapper.prepare(EncodingIdMapper.java:270)*
> *at 
> org.neo4j.unsafe.impl.batchimport.IdMapperPreparationStep.process(IdMapperPreparationStep.java:54)*
> *at 
> org.neo4j.unsafe.impl.batchimport.staging.LonelyProcessingStep$1.run(LonelyProcessingStep.java:56)*
> *Import error: Value 4386536741 is too big to be represented as 
> java.lang.Integer*
> *java.lang.ArithmeticException: Value 4386536741 is too big to be 
> represented as java.lang.Integer*
> *at 
> org.neo4j.unsafe.impl.batchimport.Utils.safeCastLongToInt(Utils.java:36)*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort.sortRadix(ParallelSort.java:142)*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort.run(ParallelSort.java:68)*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.EncodingIdMapper.prepare(EncodingIdMapper.java:270)*
> *at 
> org.neo4j.unsafe.impl.batchimport.IdMapperPreparationStep.process(IdMapperPreparationStep.java:54)*
> *at 
> org.neo4j.unsafe.impl.batchimport.staging.LonelyProcessingStep$1.run(LonelyProcessingStep.java:56)*
>
> *Exception in thread "TrackerInitializer-26" 
> java.lang.ArrayIndexOutOfBoundsException: Requested index -1915368251, but 
> length is 2837264380*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.OffHeapNumberArray.addressOf(OffHeapNumberArray.java:54)*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.OffHeapIntArray.set(OffHeapIntArray.java:48)*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.ParallelSort$TrackerInitializer.run(ParallelSort.java:411)*
> *at 
> org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:123)*
>
> It looks to me that something went wrong while neo4j was trying to 
> create the index. Here come my questions:
>
> 1. I assume the problem is that the index type is set to java.lang.Integer 
> by default, *is that true*? 
>  It would also be greatly appreciated if someone could help me 
> understand what the value 4386536741 is here; I definitely don't have that 
> many nodes.
>
> 2. Whatever its datatype is, *how do I change it to 
> java.lang.Long in the neo4j-import command*?
>
>
> Thanks a lot for your time.
>
>
>
>
>
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to neo4j+un...@googlegroups.com .
> For more options, visit https://groups.google.com/d/optout.
>
>
>



Re: [Neo4j] Re: Neo4j Traversal Framework

2016-03-14 Thread Mattias Persson
That looks like a classpath issue. How are you running neo4j, using a
downloaded server and the provided scripts or embedded and packaging the
dependencies manually? Can you list the classpath in use here?

On Mon, Mar 14, 2016 at 8:32 AM, Radheshyam Verma 
wrote:

> Hi Mattias Persson,
> I need help regarding an error I get when I start server. Can you help me
> find the cause? Here is the stacktrace:-
>
>  ERROR [org.neo4j]: Exception when stopping
> org.neo4j.index.impl.lucene.LuceneDataSource@2aad7c69
> org.neo4j.index.impl.lucene.LuceneDataSource.unbindLogicalLog()V
> java.lang.NoSuchMethodError:
> org.neo4j.index.impl.lucene.LuceneDataSource.unbindLogicalLog()V
> at
> org.neo4j.index.impl.lucene.LuceneDataSource.stop(LuceneDataSource.java:343)
> at
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:547)
> at
> org.neo4j.kernel.lifecycle.LifeSupport.remove(LifeSupport.java:339)
> at
> org.neo4j.kernel.impl.transaction.XaDataSourceManager.unregisterDataSource(XaDataSourceManager.java:272)
> at
> org.neo4j.index.lucene.LuceneKernelExtension.stop(LuceneKernelExtension.java:92)
> at
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at
> org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
> at
> org.neo4j.kernel.extension.KernelExtensions.stop(KernelExtensions.java:125)
> at
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at
> org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
> at
> org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:185)
> at
> org.neo4j.kernel.InternalAbstractGraphDatabase.shutdown(InternalAbstractGraphDatabase.java:809)
> at
> org.springframework.data.neo4j.support.DelegatingGraphDatabase.shutdown(DelegatingGraphDatabase.java:283)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.springframework.beans.factory.support.DisposableBeanAdapter.invokeCustomDestroyMethod(DisposableBeanAdapter.java:353)
> at
> org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:276)
> at
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:578)
> at
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:554)
> at
> org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingleton(DefaultListableBeanFactory.java:925)
> at
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:523)
> at
> org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingletons(DefaultListableBeanFactory.java:932)
> at
> org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:997)
> at
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:547)
> at
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:446)
> at
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:328)
> at
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107)
> at
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4760)
> at
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5184)
> at
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
> at
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
> at
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
> at
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714)
> at
> org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:919)
> at
> org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1704)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>

[Neo4j] Re: Can not restart Neo4j after crashed

2016-02-22 Thread Mattias Persson
Recovery is usually quite fast (seconds or minutes). Can you grab a thread 
dump from when you believe it has hung? My suspicion is something regarding 
rotation of counts store.

On Thursday, February 18, 2016 at 2:48:49 AM UTC+1, 宋鹏 wrote:
>
> Well, when I came across the crashing issue, I restarted neo4j, and it 
> blocked at "performing recovery..", and when I came back the next morning 
> it still showed the same message.
> Should I wait much longer for recovery?
>
> On Thursday, February 18, 2016 at 3:06:55 AM UTC+8, David Fauth wrote:
>>
>> How long did you let 2.2.3 try and recover? It has to recover the 
>> database successfully before it can be loaded into a new database.
>>
>> On Tuesday, February 16, 2016 at 7:51:26 AM UTC-5, 宋鹏 wrote:
>>>
>>> Hello, wish to get some help here...
>>>
>>> I deployed a neo4j server of version 2.2.3 several months ago, but it 
>>> crashed last week, and I can never restart it again.
>>> The starting process was blocked after printing the message:
>>>
>>> Detected incorrectly shut down database, performing recovery..

>>>
>>> To work around, I re-deployed a new instance of version 2.3.2, and copied 
>>> graph.db to the data folder of this new instance.
>>> However, with no luck, it also failed with the following message printed 
>>> many many times...
>>>
>>> Caused by: 
 org.neo4j.kernel.impl.storemigration.StoreUpgrader$UnexpectedUpgradingStoreVersionException:
  
 '/var/lib/neo4j/data/graph.db/neostore.nodestore.db' has a store version 
 number that we cannot upgrade from. Expected 'v0.A.3' but file is version 
 ''.

>>>
>>> Of course, I checked that the configuration was correct: 
>>>  allow_store_upgrade=true
>>>
>>> It would be a great help if anybody happens to know something about 
>>> this. Thanks!
>>>
>>>
>>>
>>>



Re: [Neo4j] Re: Neo4j Traversal Framework

2016-01-27 Thread Mattias Persson
You're correct in principle, but it's not a new "query" as such that is
sent every time. The traversal state is kept between calls to hasNext/next,
but that's just a detail.

On Wed, Jan 27, 2016 at 1:20 PM, Radheshyam Verma 
wrote:

> Thanks for the reply,
> So you mean that for each iteration of the following 'for' loop, a query is
> sent to get the next node from the database if it exists, and that not all
> the nodes are returned in one go before the 'for' loop even starts.
>
> for(Node currentNode : database.traversalDescription()
> .depthFirst()
> .uniqueness(Uniqueness.NODE_GLOBAL)
> .order(BranchOrderingPolicies.PREORDER_BREADTH_FIRST)
> .relationships(, Direction.BOTH)
> .evaluator(Evaluators.excludeStartPosition())
> .traverse(node)
> .nodes())
> {
>
> }
>
> Thanks for the response again.
>
>
> On Wed, Jan 27, 2016 at 2:24 PM, Mattias Persson <
> matt...@neotechnology.com> wrote:
>
>> With the traversal framework, the actual work of traversing happens
>> lazily on every call on hasNext/next on the returned Traverser (in the end
>> Iterator). You can simply stop pulling more paths after a certain
>> number of paths have been extracted.
>>
>>
>> On Monday, January 25, 2016 at 8:31:37 AM UTC+1, Radheshyam Verma wrote:
>>>
>>> Hi,
>>> I am using traversal framework to traverse graph which returns nodes.
>>> Can we somehow specify a limit on number of nodes which the traversal
>>> returns like we do in Query using "LIMIT" and "SKIP".
>>> Thanks.
>>>
>
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



[Neo4j] Re: Neo4j Traversal Framework

2016-01-27 Thread Mattias Persson
With the traversal framework, the actual work of traversing happens lazily 
on every call on hasNext/next on the returned Traverser (in the end 
Iterator). You can simply stop pulling more paths after a certain 
number of paths have been extracted.
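The lazy-pull pattern described above can be sketched with plain Java, independent of Neo4j's types: the iterator does its "work" only inside next(), so stopping after N pulls bounds the total work, just like a Cypher LIMIT. The names below are illustrative stand-ins, not the Traverser API:

```java
import java.util.Iterator;

public class LazyLimitDemo {
    // An unbounded lazy source standing in for a Traverser's node iterator.
    static Iterator<Integer> lazySource() {
        return new Iterator<Integer>() {
            int i = 0;
            @Override public boolean hasNext() { return true; } // never exhausted
            @Override public Integer next() { return i++; }     // work happens lazily here
        };
    }

    public static void main(String[] args) {
        Iterator<Integer> nodes = lazySource();
        int limit = 5;
        // Emulate LIMIT: stop pulling after `limit` elements; the remaining
        // (unbounded) traversal work is simply never performed.
        for (int taken = 0; taken < limit && nodes.hasNext(); taken++) {
            System.out.println(nodes.next());
        }
    }
}
```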

On Monday, January 25, 2016 at 8:31:37 AM UTC+1, Radheshyam Verma wrote:
>
> Hi,
> I am using traversal framework to traverse graph which returns nodes.
> Can we somehow specify a limit on number of nodes which the traversal 
> returns like we do in Query using "LIMIT" and "SKIP".
> Thanks.
>



[Neo4j] Re: org.neo4j.unsafe.batchinsert.BatchInserter.shutdown() fails with IllegalArgumentException

2015-10-16 Thread Mattias Persson
This should have been fixed in 2.2.6 (excessive memory usage when building 
counts store in the end)

On Thursday, October 15, 2015 at 12:26:41 PM UTC+2, Qi Song wrote:
>
> Hello,
> I'm now trying to follow this page 
>  to load 
> Dbpedia.ttl into neo4j. The import seems to work well, but when I use 
> db.shutdown() to close the batchInserter, I get several errors. I'm not 
> quite sure of the reason. 
> One weird thing is that if I do not import any labels (just neglect them 
> when parsing), I do not get these errors. I suspect something is wrong with 
> the index. Can anyone provide any suggestions?
>
> Exception in thread "main" java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
> Caused by: java.lang.RuntimeException: Panic called, so exiting
> at 
> org.neo4j.unsafe.impl.batchimport.staging.AbstractStep.assertHealthy(AbstractStep.java:200)
> at 
> org.neo4j.unsafe.impl.batchimport.staging.ProducerStep.process(ProducerStep.java:78)
> at 
> org.neo4j.unsafe.impl.batchimport.staging.ProducerStep$1.run(ProducerStep.java:54)
> Caused by: java.lang.IllegalArgumentException
> at sun.misc.Unsafe.allocateMemory(Native Method)
> at 
> org.neo4j.unsafe.impl.internal.dragons.UnsafeUtil.malloc(UnsafeUtil.java:324)
> at 
> org.neo4j.unsafe.impl.batchimport.cache.OffHeapNumberArray.<init>(OffHeapNumberArray.java:41)*
> at 
> org.neo4j.unsafe.impl.batchimport.cache.OffHeapLongArray.<init>(OffHeapLongArray.java:34)*
> at 
> org.neo4j.unsafe.impl.batchimport.cache.NumberArrayFactory$2.newLongArray(NumberArrayFactory.java:122)
> at 
> org.neo4j.unsafe.impl.batchimport.cache.NumberArrayFactory$Auto.newLongArray(NumberArrayFactory.java:154)
> at 
> org.neo4j.unsafe.impl.batchimport.RelationshipCountsProcessor.<init>(RelationshipCountsProcessor.java:60)*
> at 
> org.neo4j.unsafe.impl.batchimport.ProcessRelationshipCountsDataStep.processor(ProcessRelationshipCountsDataStep.java:73)
> at 
> org.neo4j.unsafe.impl.batchimport.ProcessRelationshipCountsDataStep.process(ProcessRelationshipCountsDataStep.java:60)
> at 
> org.neo4j.unsafe.impl.batchimport.ProcessRelationshipCountsDataStep.process(ProcessRelationshipCountsDataStep.java:36)
> at 
> org.neo4j.unsafe.impl.batchimport.staging.ProcessorStep$4.run(ProcessorStep.java:120)
> at 
> org.neo4j.unsafe.impl.batchimport.staging.ProcessorStep$4.run(ProcessorStep.java:102)
> at 
> org.neo4j.unsafe.impl.batchimport.executor.DynamicTaskExecutor$Processor.run(DynamicTaskExecutor.java:237)
>
> Bests~
> Qi Song
>



[Neo4j] Re: Unique constraint not getting respected

2015-10-05 Thread Mattias Persson
Hi,

That definitely sounds worrying and unexpected. If you could make available 
code to reproduce this it would be most helpful.

On Sunday, October 4, 2015 at 7:17:06 AM UTC+2, varun kumar wrote:
>
> I have created a neo4j DB locally and assigned some indices and unique 
> constraint.
>
> Here is the :schema
>
> Indexes
>   ON :Actor(social_id) ONLINE (for uniqueness constraint) 
>   ON :Category(name)   ONLINE (for uniqueness constraint) 
>
> Constraints
>   ON (actor:Actor) ASSERT actor.social_id IS UNIQUE
>   ON (category:Category) ASSERT category.name IS UNIQUE
>
> Additionally I do a createIfNotFound() in my java code, where I first do a 
> find and then create if not found.
>
> Despite these two checks, I still see multiple nodes with label Category 
> and the same name existing in the DB.
>
>  MATCH (n:Category) WHERE n.name='garden' RETURN n
>
> name: garden
> name: garden
> name: garden
> name: garden
> Returned 4 rows in 49 ms.
>
> I am not sure what I am missing, and why is the unique constraint not 
> being honored.
>
>
> Would appreciate any pointers to debug.
>
>
> Thanks
>
> Varun
>



Re: [Neo4j] Re: "Waiting for all transactions to close"

2015-09-14 Thread Mattias Persson
Store version 0.A.4 was used in milestones for 2.2 and upgrading from that
isn't supported. 0.A.3 is that of neo4j version 2.1, and 2.2.x has store
version 0.A.5. So you'll have to recreate, with version 2.2.5, the database
that was created using a milestone.

On Tue, Sep 8, 2015 at 5:21 PM,  wrote:

> Mathias,
> I've just tried to install 2.2.5 on my dev server.
> It seems that store version of my db is v0.A.4 (neo4j 2.2.0) but neo4j
> 2.2.5 expects v0.A.3.
> So, it doesn't sound like an option for me. I fear that I'll have to wait
> for a stable 2.3.0 to upgrade my store to v5.
>
>
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] Re: "Waiting for all transactions to close"

2015-09-08 Thread Mattias Persson
On Mon, Sep 7, 2015 at 6:46 PM,  wrote:

> Hi Nigel, hi Mathias,
>
> Thanks for the suggestions !
>
> > You should try with Neo4j 2.2.5
> Do you think to a specific fix released in this version which may help
> here ?
> Anyway, I think I'll upgrade in a close future.
>

Yes, there have been resolved issues regarding bugs just like that, so it's
definitely a possibility.

>
> > I strongly suggest that you upgrade to py2neo 2.0 and, if possible,
> migrate to using Cypher transactions instead of batches.
> Yep. It's clearly a task in my todo list ! :)
> For reasons related to the planning of my project, I can't do it right now
> because it would need to be done in the rush and that sounds risky.
>
> Anyway, I think I've made some good progresses:
> - I've found why no exception was bubbling even with a call to submit().
> Basically, the exception was silenced by some of my code handling
> exceptions, Argh. Stupid me !
> - The scenario seems to be the following:
>   - a process writes a bunch of update/create with a WriteBatch
>   - a concurrent process tries to read some data with a CypherQuery.
>   - a lock is detected for the read request. As I've implemented a "retry"
> pattern around my calls to CypherQuery, the read request is sent again but
> the first one is never closed => Error message appearing in the logs of
> Neo4j server.
>
> I'm currently testing this modification of the submit() method:
>
> -
> def submit(self):
>     responses = None
>     try:
>         responses = self._execute()
>         return [BatchResponse(rs).hydrated for rs in responses.json]
>     finally:
>         if responses:
>             responses.close()
>
> -
>
> So far, results seem good, but I want the processes to run over a long period.
> I just want to be sure that putting the call to _execute() inside the
> try/finally block won't have nasty side-effects (especially in case of an
> exception occurring in the _execute() method).
>
> laurent
>
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] Re: "Waiting for all transactions to close"

2015-09-07 Thread Mattias Persson
You should try with Neo4j 2.2.5

On Sunday, September 6, 2015 at 8:06:04 PM UTC+2, lauren...@gmail.com wrote:
>
> Hi Nigel,
>
> Thanks for the answer. Very much appreciated and always good to get a 
> confirmation from the expert :)
> I'm still investigating the case but progress is very slow because I can't 
> reproduce the problem with "artificial" simple code. 
>
> For now, my intuition is that a cypher query or a write batch is stuck and 
> locks the next ones. My main problem is that I can't get any log of an 
> exception.
> Currently, I'm checking if enabling execution_guard could help to unlock 
> the situation.
>
> Just one more question: In Py2neo 1.6, we have the following code for 
> WriteBatch:
>
> def run(self):
>     return self._execute().close()
>
> def submit(self):
>     responses = self._execute()
>     try:
>         return [BatchResponse(rs).hydrated for rs in responses.json]
>     finally:
>         responses.close()
>
> One of my previous tests seems to indicate that run() doesn't forward 
> exceptions, but more importantly for me, it seems that it doesn't freeze the 
> processes :)
> I guess this behavior is related to the execution of close(). So, I wonder 
> if a modification of the submit() method may help to solve my problem. Do you 
> see a potential problem with this modification?
>
> def submit(self):
>     responses = None
>     try:
>         responses = self._execute()
>         return [BatchResponse(rs).hydrated for rs in responses.json]
>     finally:
>         if responses:
>             responses.close()
>



Re: [Neo4j] What do propertystore.db and propertystore.db.strings store exactly?

2015-07-22 Thread Mattias Persson
On Tue, Jul 21, 2015 at 10:27 PM, Zongheng Yang 
wrote:

> Thanks, Matthias.  If I understand you correctly, the result of my
> previous *2 calculation roughly matching the two store files on disk is not
> due to JVM String overhead, rather it is due to Neo4j's quantization
> overhead.
>
> There's one loose end that I wish to tighten: why does ~27GB of string
> characters -- which already contain wasted quantized bytes --  become ~56GB
> on JVM heap?  My only hypothesis is that these wasted bytes used in
> quantization somehow also roughly incur a *2 overhead, just as the useful
> bytes do when turned into String objects.  Is this right? If so, why do the
> useless bytes incur this overhead?
>

Yes when loaded (in 2.2) such properties will be kept as String objects,
that's correct... so that's probably responsible for the 2* overhead.
Sorry, I thought you talked about storage overhead. Anyways, that's how 2.2
works. 2.3 will have reduced heap overhead since it will have no "object"
caching like that.
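As a rough sanity check of that 2x figure, here is a back-of-envelope sketch. It assumes a pre-Java-9 JVM where a String wraps a UTF-16 char[]; the header sizes are typical approximations, not measurements:

```java
public class StringOverheadDemo {
    // Approximate heap bytes for a String holding `asciiChars` ASCII
    // characters: object headers plus 2 bytes per UTF-16 char.
    static long approxHeapBytes(int asciiChars) {
        long charArray = 16 + 2L * asciiChars; // array header + UTF-16 data
        long stringObj = 24;                   // String header and fields
        return stringObj + charArray;
    }

    public static void main(String[] args) {
        // 140 ASCII characters (~140 bytes on disk) hydrated as a String:
        System.out.println(approxHeapBytes(140)); // ~320 bytes, over 2x the raw data
    }
}
```

So even before any caching structures, hydrating raw character data into String objects roughly doubles its footprint, consistent with ~27GB on disk turning into ~56GB of heap.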


> Thanks,
> Zongheng
>
> On Tue, Jul 21, 2015 at 2:04 AM Mattias Persson 
> wrote:
>
>> To clarify, it's not serialized String objects. Neo4j stores the
>> character data, either compacted by being able to use a smaller charset
>> than utf-8/ascii so that fewer bits per character is required, or if string
>> is "long" by neo4j measures, as plain characters into the
>> neostore.propertystore.db.strings store. The long strings may have much
>> bigger overhead, since the character data is quantized into 60/120 byte
>> records. That's probably the inflation that you're seeing.
>>
>>
>> On Saturday, July 18, 2015 at 9:20:43 PM UTC+2, Zongheng Yang wrote:
>>
>>> If those are serialized String objects then I'm seeing the following
>>> mismatch between measurement and calculation:
>>>
>>
>>> The graph I'm using has 4.9 million nodes, each of which has 40 string
>>> properties (each of which has 16 characters).  It has 70 million directed
>>> edges, each of which has 1 string property with 140 characters.
>>>
>>> Assuming JVM String objects incur a 2x overhead, then the total
>>> in-memory size of these properties are: (40*16*4.9*10^6 + 70*10^6*140) * 2
>>> / 2^30 = 24 GB.  This roughly matches the on-disk footprint:
>>>
>>> 10G neostore.propertystore.db
>>> 17G neostore.propertystore.db.strings
>>>
>>> So I think this matches Chris' explanation well (these two store files
>>> are serialized String objects).
>>>
>>> However, after this warmup [1] to load the whole graph including node &
>>> relationship properties, the JVM heap memory usage is: Max 68.6 GB,
>>> Allocated 67.6 GB, Used 55.9 GB.
>>>
>>> Where does this mismatch (56GB vs. < 30GB) come from?  What's wrong in
>>> my calculation & understanding?  It cannot be the other stores (node /
>>> relationship) as `du -shc *store.db*` returns 29GB total on-disk, 27GB of
>>> which are the properties.
>>>
>>> [1] http://pastebin.com/9WjfYEb1
>>>
>>> Any help would be appreciated!
>>>
>>> Zongheng
>>>
>>> On Sat, Jul 18, 2015 at 3:05 AM Chris Vest 
>>> wrote:
>>>
>> Does this sound right?  Also, node properties & relationship properties
>>>> are interleaved and stored together in these files, right?
>>>>
>>>>
>>>> Yes and yes.
>>>>
>>>> Lastly -- is everything in (1) and (2) deserialized JVM objects in raw
>>>> bytes *or* just UTF-8 characters?  It could make a difference, since if
>>>> neo4j needs to create a new String object out of the bytes read from these
>>>> files, then the memory footprint could be larger than the on-disk file size
>>>> due to object overhead.
>>>>
>>>>
>>>> We use a dozen different encodings depending on the contents of the
>>>> given string. It’s not like compression, but it does reduce space usage in
>>>> many cases. The embedded API deals in String objects, so we have to
>>>> serialise and deserialise to support that. If you set cache_type=none, then
>>>> the overhead of the String objects should be low as there would be a lot
>>>> fewer of them.
>>>>
>>>>
>>>> --
>>>> Chris Vest
>>>> System Engineer, Neo Technology
>>>> [ skype: mr.chrisvest, twitter: chvest ]
>>>>
>>>>
>>>> 

Re: [Neo4j] What do propertystore.db and propertystore.db.strings store exactly?

2015-07-21 Thread Mattias Persson
To clarify, it's not serialized String objects. Neo4j stores the character 
data, either compacted by being able to use a smaller charset than 
utf-8/ascii so that fewer bits per character is required, or if string is 
"long" by neo4j measures, as plain characters into the 
neostore.propertystore.db.strings store. The long strings may have much 
bigger overhead, since the character data is quantized into 60/120 byte 
records. That's probably the inflation that you're seeing.
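The quantization overhead mentioned above can be illustrated with a tiny calculation. This assumes the 60/120-byte record sizes stated in the paragraph; it is a sketch of the ceiling effect, not Neo4j's actual storage code:

```java
public class QuantizationDemo {
    // Bytes actually consumed on disk when `payloadBytes` of string data is
    // quantized into fixed-size records: always a whole number of records.
    static long quantizedBytes(long payloadBytes, int recordSize) {
        long records = (payloadBytes + recordSize - 1) / recordSize; // ceiling division
        return records * recordSize;
    }

    public static void main(String[] args) {
        // 140 bytes of character data in 120-byte records: 2 records, 240 bytes.
        System.out.println(quantizedBytes(140, 120));
        // One byte over a record boundary still costs a full extra record.
        System.out.println(quantizedBytes(61, 60)); // 120 bytes for 61 bytes of payload
    }
}
```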

On Saturday, July 18, 2015 at 9:20:43 PM UTC+2, Zongheng Yang wrote:
>
> If those are serialized String objects then I'm seeing the following 
> mismatch between measurement and calculation:
>
> The graph I'm using has 4.9 million nodes, each of which has 40 string 
> properties (each of which has 16 characters).  It has 70 million directed 
> edges, each of which has 1 string property with 140 characters.
>
> Assuming JVM String objects incur a 2x overhead, then the total in-memory 
> size of these properties are: (40*16*4.9*10^6 + 70*10^6*140) * 2 / 2^30 = 
> 24 GB.  This roughly matches the on-disk footprint:
>
> 10G neostore.propertystore.db
> 17G neostore.propertystore.db.strings
>
> So I think this matches Chris' explanation well (these two store files are 
> serialized String objects).
>
> However, after this warmup [1] to load the whole graph including node & 
> relationship properties, the JVM heap memory usage is: Max 68.6 GB, 
> Allocated 67.6 GB, Used 55.9 GB. 
>
> Where does this mismatch (56GB vs. < 30GB) come from?  What's wrong in my 
> calculation & understanding?  It cannot be the other stores (node / 
> relationship) as `du -shc *store.db*` returns 29GB total on-disk, 27GB of 
> which are the properties.
>
> [1] http://pastebin.com/9WjfYEb1
>
> Any help would be appreciated!
>
> Zongheng
>
> On Sat, Jul 18, 2015 at 3:05 AM Chris Vest  > wrote:
>
>> Does this sound right?  Also, node properties & relationship properties 
>> are interleaved and stored together in these files, right?
>>
>>
>> Yes and yes.
>>
>> Lastly -- is everything in (1) and (2) deserialized JVM objects in raw 
>> bytes *or* just UTF-8 characters?  It could make a difference, since if 
>> neo4j needs to create a new String object out of the bytes read from these 
>> files, then the memory footprint could be larger than the on-disk file size 
>> due to object overhead.
>>
>>
>> We use a dozen different encodings depending on the contents of the given 
>> string. It’s not like compression, but it does reduce space usage in many 
>> cases. The embedded API deals in String objects, so we have to serialise 
>> and deserialise to support that. If you set cache_type=none, then the 
>> overhead of the String objects should be low as there would be a lot fewer 
>> of them.
>>
>>
>> --
>> Chris Vest
>> System Engineer, Neo Technology
>> [ skype: mr.chrisvest, twitter: chvest ]
>>
>>  
>> On 17 Jul 2015, at 17:19, Zongheng Yang > 
>> wrote:
>>
>> Thanks!  Some follow-ups: my current understanding is that,
>>
>> (1) propertystore.db:  metadata (as you mentioned) + possibly inlined 
>> short strings/fields [otherwise, pointers]
>> (2) propertystore.db.strings:  long string properties
>>
>> Does this sound right?  Also, node properties & relationship properties 
>> are interleaved and stored together in these files, right?
>>
>>
>> Lastly -- is everything in (1) and (2) deserialized JVM objects in raw 
>> bytes *or* just UTF-8 characters?  It could make a difference, since if 
>> neo4j needs to create a new String object out of the bytes read from these 
>> files, then the memory footprint could be larger than the on-disk file size 
>> due to object overhead.
>>
>> Cheers,
>> Zongheng
>>
>> On Fri, Jul 17, 2015 at 5:56 AM Chris Vest wrote:
>>
>>> The propertystore.db file also has metadata about which entities a 
>>> property belongs to, what the property names are, what type the value of a 
>>> property has, and where to find the property values in cases where those 
>>> are stored in other files such as propertystore.db.strings.
>>>
>>> --
>>> Chris Vest
>>> System Engineer, Neo Technology
>>> [ skype: mr.chrisvest, twitter: chvest ]
>>>
>>>  
>>> On 16 Jul 2015, at 20:01, Zongheng Yang wrote:
>>>
>>> Hi all,
>>>
>>> Quick question: what do propertystore.db and propertystore.db.strings 
>>> store, respectively?
>>>
>>> My CSV headers look like these:
>>>
>>> edges -- :START_ID, :END_ID, :TYPE, timestamp:LONG, attr
>>> nodes -- :ID, name0, ..., name39
>>>
>>> And propertystore.db totals 10GB, propertystore.db.strings totals 17GB.  
>>> I did a quick calculation, assuming those two files store serialized JVM 
>>> Strings, all the node properties should total 6GB in memory, and all the 
>>> edge properties should total 17GB in memory -- the first number doesn't 
>>> match the size of propertystore.db, so I am a bit confused.
>>>
>>> Thanks in advance,
>>> Zongheng
>>>

Re: [Neo4j] TransactionEventHandler bug?

2015-07-21 Thread Mattias Persson
https://github.com/neo4j/neo4j/pull/5010 should fix the afterCommit issue 
you're seeing. That PR ensures that there's no transaction associated with 
the current thread when afterCommit is called, so a new transaction can 
be opened there.

On Friday, July 17, 2015 at 1:02:00 PM UTC+2, Mattias Persson wrote:
>
> In the meantime I believe this PR will reduce confusion regarding 
> transaction events: https://github.com/neo4j/neo4j/pull/4991
>
> On Thursday, July 16, 2015 at 11:28:49 PM UTC+2, Mattias Persson wrote:
>>
>> Oh I can reproduce that now. Yup, that's a problem, right there
>>
>> On Thursday, July 16, 2015 at 4:45:58 PM UTC+2, Clark Richey wrote:
>>>
>>> So when I run my tests without the executor I get an error that the 
>>> transaction has already completed when I try to examine properties of 
>>> created / updated nodes. Creating a new transaction doesn't help. 
>>>
>>> Sent from my iPhone
>>>
>>> On Jul 16, 2015, at 04:57, Mattias Persson wrote:
>>>
>>> Hi Clark,
>>>
>>> I've converted your groovy code into java and run that in a unit test. 
>>> As far as I can see all events trigger as they should.
>>>
>>> What I'm worried about is the pattern that seems to be promoted here, 
>>> namely to queue the event processing in an executor service. The 
>>> transaction state of a transaction is local to the thread executing the 
>>> transaction and simply handing over that state to another thread without 
>>> any memory barriers isn't safe, i.e. it may yield unpredictable results.
>>>
>>> My advice to you is to simply scrap the executor service and execute 
>>> whatever logic you need within the same thread that gets the 
>>> beforeCommit/afterCommit calls. I'll also talk to Max about this, and 
>>> update the javadocs.
>>>
>>> Please try this and report back with results.
>>>
>>> Best,
>>> Mattias
>>>
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] TransactionEventHandler bug?

2015-07-17 Thread Mattias Persson
In the meantime I believe this PR will reduce confusion regarding 
transaction events: https://github.com/neo4j/neo4j/pull/4991

On Thursday, July 16, 2015 at 11:28:49 PM UTC+2, Mattias Persson wrote:
>
> Oh I can reproduce that now. Yup, that's a problem, right there
>
> On Thursday, July 16, 2015 at 4:45:58 PM UTC+2, Clark Richey wrote:
>>
>> So when I run my tests without the executor I get an error that the 
>> transaction has already completed when I try to examine properties of 
>> created / updated nodes. Creating a new transaction doesn't help. 
>>
>> Sent from my iPhone
>>
>> On Jul 16, 2015, at 04:57, Mattias Persson wrote:
>>
>> Hi Clark,
>>
>> I've converted your groovy code into java and run that in a unit test. As 
>> far as I can see all events trigger as they should.
>>
>> What I'm worried about is the pattern that seems to be promoted here, 
>> namely to queue the event processing in an executor service. The 
>> transaction state of a transaction is local to the thread executing the 
>> transaction and simply handing over that state to another thread without 
>> any memory barriers isn't safe, i.e. it may yield unpredictable results.
>>
>> My advice to you is to simply scrap the executor service and execute 
>> whatever logic you need within the same thread that gets the 
>> beforeCommit/afterCommit calls. I'll also talk to Max about this, and 
>> update the javadocs.
>>
>> Please try this and report back with results.
>>
>> Best,
>> Mattias
>>
>>
>>



Re: [Neo4j] TransactionEventHandler bug?

2015-07-16 Thread Mattias Persson
Oh I can reproduce that now. Yup, that's a problem, right there

On Thursday, July 16, 2015 at 4:45:58 PM UTC+2, Clark Richey wrote:
>
> So when I run my tests without the executor I get an error that the 
> transaction has already completed when I try to examine properties of 
> created / updated nodes. Creating a new transaction doesn't help. 
>
> Sent from my iPhone
>
> On Jul 16, 2015, at 04:57, Mattias Persson wrote:
>
> Hi Clark,
>
> I've converted your groovy code into java and run that in a unit test. As 
> far as I can see all events trigger as they should.
>
> What I'm worried about is the pattern that seems to be promoted here, 
> namely to queue the event processing in an executor service. The 
> transaction state of a transaction is local to the thread executing the 
> transaction and simply handing over that state to another thread without 
> any memory barriers isn't safe, i.e. it may yield unpredictable results.
>
> My advice to you is to simply scrap the executor service and execute 
> whatever logic you need within the same thread that gets the 
> beforeCommit/afterCommit calls. I'll also talk to Max about this, and 
> update the javadocs.
>
> Please try this and report back with results.
>
> Best,
> Mattias
>
>
>



Re: [Neo4j] TransactionEventHandler bug?

2015-07-16 Thread Mattias Persson
Hi Clark,

I've converted your groovy code into java and run that in a unit test. As 
far as I can see all events trigger as they should.

What I'm worried about is the pattern that seems to be promoted here, 
namely to queue the event processing in an executor service. The 
transaction state of a transaction is local to the thread executing the 
transaction and simply handing over that state to another thread without 
any memory barriers isn't safe, i.e. it may yield unpredictable results.

My advice to you is to simply scrap the executor service and execute 
whatever logic you need within the same thread that gets the 
beforeCommit/afterCommit calls. I'll also talk to Max about this, and 
update the javadocs.

Please try this and report back with results.

Best,
Mattias



Re: [Neo4j] Re: neo4j-import non-deterministically corrupts a few node ids

2015-06-16 Thread Mattias Persson
Yes, I agree --id-type ACTUAL will guarantee this constraint.
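
As an illustration of that constraint, a minimal sketch of what ACTUAL-id input looks like: the `:ID` column holds the ids 0..N directly, so they become the internal node ids and must be unique and increasing without large gaps. All paths and data below are made up, and the `neo4j-import` invocation is shown commented out rather than executed.

```shell
#!/bin/sh
# Sketch: CSV inputs suitable for neo4j-import's --id-type ACTUAL mode.
mkdir -p /tmp/import-demo
cat > /tmp/import-demo/nodes.csv <<'EOF'
:ID,name
0,alice
1,bob
2,carol
EOF
cat > /tmp/import-demo/rels.csv <<'EOF'
:START_ID,:END_ID,:TYPE
0,1,KNOWS
1,2,KNOWS
EOF
# The actual import (flags per this thread; verify against your installation):
#   bin/neo4j-import --into /tmp/import-demo/graph.db \
#     --id-type ACTUAL \
#     --nodes /tmp/import-demo/nodes.csv \
#     --relationships /tmp/import-demo/rels.csv
head -1 /tmp/import-demo/nodes.csv
```

With ACTUAL ids, `GraphDatabaseService#getNodeById(long)` and the csv-supplied ids line up by construction, which is exactly the guarantee discussed above.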

On Monday, June 15, 2015 at 9:43:38 PM UTC+2, Zongheng Yang wrote:
>
> Fantastic, in my case the ids are exactly the sequence [0, 1, ..., N] 
> without gaps, unique, and in that order.
>
> Thanks both of you for the help!
>
> On Monday, June 15, 2015 at 12:34:18 PM UTC-7, Michael Hunger wrote:
>>
>> No, --id-type actual 
>> would, but then you have to make sure you have globally unique, incrementing 
>> ids without large holes in the distribution.
>>
>>
>> On 15.06.2015 at 21:31, Zongheng Yang wrote:
>>
>> I see.  Would setting the `--processors 1` flag for neo4j-import make 
>> internal ids and external ids match in my case?  (I understand this is an 
>> implementation detail and not a user-facing property.)
>>
>> On Monday, June 15, 2015 at 12:07:56 PM UTC-7, Michael Hunger wrote:
>>>
>>> GraphDatabaseService#getNodeById(long id)
>>>
>>>
>>> takes Neo4j internal ids.
>>>
>>> Michael
>>>
>>> On 15.06.2015 at 20:59, Zongheng Yang wrote:
>>>
>>> Hi Mattias,
>>>
>>> Thanks for looking into this.  I understand the difference between Neo4j 
>>> internal ids vs. the ids supplied in the csv. 
>>>
>>> However for say GraphDatabaseService#getNodeById(long id), does this 
>>> function take the user-supplied ids or Neo4j's internal ids?
>>>
>>> If it is the former: then the conceptual mismatch doesn't fully explain 
>>> the problem (e.g. I queried the nodes/edges using user-supplied ids, and 
>>> the internal ids should not mess up the query results).  If it is the 
>>> latter, then for users programming with the Java Core API, how should they 
>>> get the correct internal ids (they only know the application-supplied ids)?
>>>
>>> Best,
>>> Zongheng
>>>
>>> On Monday, June 15, 2015 at 5:23:24 AM UTC-7, Mattias Persson wrote:
>>>>
>>>> Hello again, I'm quite confident I know what's happening here. The 
>>>> problem is the misconception that your INTEGER ids defined in the csv 
>>>> files 
>>>> will map 1-to-1 to the neo4j node/relationship ids in the store. They will 
>>>> actually match in most cases, but that's merely a coincidence.
>>>>
>>>> What you're seeing is the result of some parallelism happening in the 
>>>> importer, where batches of 10k nodes/relationships flow through different 
>>>> steps; some steps may execute multiple batches in parallel and 
>>>> don't care if reordering happens. Ids are assigned at the end.
>>>>
>>>> You're looking at the ids and see that they mismatch, but if you look 
>>>> at their data you should see that all relationships match the csv files. 
>>>> So 
>>>> please disregard the seemingly close match of neo4j node/relationship ids 
>>>> with the csv input ids as they are quite different in nature.
>>>>
>>>> On Thursday, June 11, 2015 at 11:32:55 AM UTC+2, Mattias Persson wrote:
>>>>>
>>>>> Hi, I'm one of the main authors of the import tool and I find this 
>>>>> issue quite interesting.
>>>>>
>>>>> Would you be able to share your dataset with me personally, for the 
>>>>> single purpose of trying to find the root cause?
>>>>>
>>>>> On Friday, June 5, 2015 at 5:12:43 AM UTC+2, Zongheng Yang wrote:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I'm using neo4j-import to import nodes and relationships from csv 
>>>>>> files. Let's say node id 538398 has about 100 edges and
>>>>>>
>>>>>> 538398 -> 370047
>>>>>> 538398 -> 379981
>>>>>>
>>>>>> are just two of them.  After the import, the neo4j database actually 
>>>>>>
>>>>>> - *loses* these two edges
>>>>>> - instead *corrupts* the destination ids, as follows
>>>>>>
>>>>>> 538398 -> 380047
>>>>>> 538398 -> 389981
>>>>>>
>>>>>> - *keeps* all other outgoing edges of 538398 correct
>>>>>>
>>>>>> The problem seems to be non-deterministic: doing a `rm -rf dbPath` 
>>>>>> and re-running neo4j-import seems to fix the issue, for this particular 
>>>>>> node -- but I've not done extensive tests to see whether other nodes get 
>>>>>> corrupted in this way.
>>>>>>
>>>>>> Has anyone seen this before? The graph has on the order of 1 million 
>>>>>> nodes, average degree 40. 
>>>>>>
>>>>>> Zongheng
>>>>>>
>>>>>
>>>
>>>
>>>
>>
>>
>>



[Neo4j] Re: neo4j-import non-deterministically corrupts a few node ids

2015-06-15 Thread Mattias Persson
Hello again, I'm quite confident I know what's happening here. The problem 
is the misconception that your INTEGER ids defined in the csv files will 
map 1-to-1 to the neo4j node/relationship ids in the store. They will 
actually match in most cases, but that's merely a coincidence.

What you're seeing is the result of some parallelism happening in the 
importer, where batches of 10k nodes/relationships flow through different 
steps; some steps may execute multiple batches in parallel and 
don't care if reordering happens. Ids are assigned at the end.

You're looking at the ids and see that they mismatch, but if you look at 
their data you should see that all relationships match the csv files. So 
please disregard the seemingly close match of neo4j node/relationship ids 
with the csv input ids as they are quite different in nature.
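
The reordering described above can be sketched in plain Java. This only simulates the effect (batch size, seed, and data are made up); it is not the importer's actual code.

```java
import java.util.*;

public class BatchReorderDemo {
    // Simulate the importer: split rows into batches, let "parallel" steps
    // reorder whole batches (a seeded shuffle stands in for thread timing),
    // then assign internal ids at the end, in arrival order.
    static Map<Long, String[]> importRows(List<String[]> rows, int batchSize, long seed) {
        List<List<String[]>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize)
            batches.add(new ArrayList<>(rows.subList(i, Math.min(i + batchSize, rows.size()))));
        Collections.shuffle(batches, new Random(seed));
        Map<Long, String[]> store = new LinkedHashMap<>();
        long nextId = 0;
        for (List<String[]> batch : batches)
            for (String[] row : batch)
                store.put(nextId++, row);
        return store;
    }

    public static void main(String[] args) {
        List<String[]> rows = new ArrayList<>();
        for (int i = 0; i < 6; i++) rows.add(new String[]{"u" + i, "name" + i});
        Map<Long, String[]> store = importRows(rows, 2, 42);
        // Internal ids follow arrival order, not csv line order, but every
        // row survives intact: look data up by its own properties, not by
        // assuming internal id == csv line number.
        for (Map.Entry<Long, String[]> e : store.entrySet())
            System.out.println("internal id " + e.getKey() + " -> user id " + e.getValue()[0]);
    }
}
```

The invariant to rely on is the second one: every row (and relationship) is imported intact, even though the internal ids may not match the input order.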

On Thursday, June 11, 2015 at 11:32:55 AM UTC+2, Mattias Persson wrote:
>
> Hi, I'm one of the main authors of the import tool and I find this issue 
> quite interesting.
>
> Would you be able to share your dataset with me personally, for the single 
> purpose of trying to find the root cause?
>
> On Friday, June 5, 2015 at 5:12:43 AM UTC+2, Zongheng Yang wrote:
>>
>> Hi all,
>>
>> I'm using neo4j-import to import nodes and relationships from csv files. 
>> Let's say node id 538398 has about 100 edges and
>>
>> 538398 -> 370047
>> 538398 -> 379981
>>
>> are just two of them.  After the import, the neo4j database actually 
>>
>> - *loses* these two edges
>> - instead *corrupts* the destination ids, as follows
>>
>> 538398 -> 380047
>> 538398 -> 389981
>>
>> - *keeps* all other outgoing edges of 538398 correct
>>
>> The problem seems to be non-deterministic: doing a `rm -rf dbPath` and 
>> re-running neo4j-import seems to fix the issue, for this particular node -- 
>> but I've not done extensive tests to see whether other nodes get corrupted 
>> in this way.
>>
>> Has anyone seen this before? The graph has on the order of 1 million 
>> nodes, average degree 40. 
>>
>> Zongheng
>>
>



[Neo4j] Re: Neo4j can't start after wrong shutdown: PageCache error

2015-06-11 Thread Mattias Persson
I think that the exception seen here is hiding the real exception. 
Basically something goes wrong during startup and so neo4j tries to shut 
down and back out before throwing that exception out to the user. Shutting 
down somehow results in this exception and so that gets thrown instead of 
the real exception. I'll have a look at trying to reproduce with dummy 
exceptions at different points in startup.
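
The masking pattern can be sketched in plain Java. Names are hypothetical; this shows the general back-out problem and the `addSuppressed` remedy, not neo4j's actual lifecycle code.

```java
public class StartupFailureDemo {
    // Sketch of the masking problem described above: if backing out
    // (shutdown) itself throws while handling a startup failure, the
    // original exception is lost unless it is kept as a suppressed one.
    static RuntimeException startAndBackOut() {
        RuntimeException startupFailure = new RuntimeException("real startup failure");
        try {
            throw startupFailure;       // startup goes wrong...
        } catch (RuntimeException e) {
            try {
                shutdown();             // ...so we try to back out...
            } catch (RuntimeException shutdownFailure) {
                e.addSuppressed(shutdownFailure);  // ...keeping both exceptions
            }
            return e;
        }
    }

    static void shutdown() {
        // Stand-in for the shutdown failure seen in the log above.
        throw new IllegalStateException("Cannot close the PageCache while files are still mapped");
    }

    public static void main(String[] args) {
        RuntimeException e = startAndBackOut();
        System.out.println(e.getMessage());
        System.out.println(e.getSuppressed()[0].getMessage());
    }
}
```

Without the `addSuppressed` call, only the PageCache `IllegalStateException` would surface, exactly the symptom described in this thread.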

On Tuesday, June 9, 2015 at 1:02:26 AM UTC+2, Péterson Sampaio Procópio 
Júnior wrote:
>
> Hello all,
>
> after what was probably a wrong shutdown, I am failing to start Neo4j 
> server. The error occurred in the 2.2.0 windows version of Neo4j. I 
> upgraded to 2.2.2, but it still remains.
> The relevant log message follows below. I think the error is related, as 
> the message says, to the cache mechanism.
>
>
> 2015-06-08 19:48:05.251+ ERROR [o.n.s.d.LifecycleManagingDatabase]: 
>> Failed to start database.
>> org.neo4j.kernel.lifecycle.LifecycleException: Failed to transition 
>> component 'org.neo4j.kernel.impl.pagecache.PageCacheLifecycle@5f8dbdab' 
>> from STOPPED to SHUTTING_DOWN. Please see attached cause exception
>> ...
>> Caused by: java.lang.IllegalStateException: Cannot close the PageCache 
>> while files are still mapped:
>> neostore.propertystore.db.index (1 mapping)
>> neostore.propertystore.db.index.keys (1 mapping)
>> neostore.labeltokenstore.db (1 mapping)
>> neostore.labeltokenstore.db.names (1 mapping)
>> neostore.relationshiptypestore.db (1 mapping)
>> neostore.relationshiptypestore.db.names (1 mapping)
>> at 
>> org.neo4j.io.pagecache.impl.muninn.MuninnPageCache.close(MuninnPageCache.java:483)
>>  
>> ~[neo4j-io-2.2.2.jar:2.2.2]
>> at 
>> org.neo4j.kernel.impl.pagecache.PageCacheLifecycle.shutdown(PageCacheLifecycle.java:42)
>>  
>> ~[neo4j-kernel-2.2.2.jar:2.2.2]
>> at 
>> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:555)
>>  
>> [neo4j-kernel-2.2.2.jar:2.2.2]
>> ... 11 common frames omitted
>>
>
>
> The most similar situation I found, and its solution, are described in 
> this Stack Overflow answer:
>
> I can think of two causes here:
>>
>>1. another java process is accessing some of the files, unless 
>>something other helps consider a kill -9 
>>2. double check the file permissions in your graph.db folder.
>>
>>
> However, this seems to not be the same case I am running into.
> Could anyone help me with this?
> I attached the relavant pieces of my messages.log.
>
> Thank you,
>
> Péterson
>
>
>



[Neo4j] Re: neo4j-import non-deterministically corrupts a few node ids

2015-06-11 Thread Mattias Persson
Hi, I'm one of the main authors of the import tool and I find this issue 
quite interesting.

Would you be able to share your dataset with me personally, for the single 
purpose of trying to find the root cause?

On Friday, June 5, 2015 at 5:12:43 AM UTC+2, Zongheng Yang wrote:
>
> Hi all,
>
> I'm using neo4j-import to import nodes and relationships from csv files. 
> Let's say node id 538398 has about 100 edges and
>
> 538398 -> 370047
> 538398 -> 379981
>
> are just two of them.  After the import, the neo4j database actually 
>
> - *loses* these two edges
> - instead *corrupts* the destination ids, as follows
>
> 538398 -> 380047
> 538398 -> 389981
>
> - *keeps* all other outgoing edges of 538398 correct
>
> The problem seems to be non-deterministic: doing a `rm -rf dbPath` and 
> re-running neo4j-import seems to fix the issue, for this particular node -- 
> but I've not done extensive tests to see whether other nodes get corrupted 
> in this way.
>
> Has anyone seen this before? The graph has on the order of 1 million nodes, 
> average degree 40. 
>
> Zongheng
>



Re: [Neo4j] from Neo4j1.9.9 to Neo4j 2.2.0 exception querying Lucene

2015-05-12 Thread Mattias Persson
It should be included in 2.2.2 when that gets released, and also in the next 
2.3 milestone (2.3-M02).

On Wednesday, May 6, 2015 at 2:19:30 PM UTC+2, Rita wrote:
>
> Thank you very much for the bug fix
>  https://github.com/neo4j/neo4j/pull/4509 
> <https://github.com/neo4j/neo4j/pull/4509>
> When will be possibile to have this solution into a new release?
>
> Rita
>
> On Monday, 20 April 2015 at 10:37:15 UTC+2, Rita wrote:
>>
>> Hi Mattias,
>> thank you for the reply. I opened this issue: 
>> https://github.com/neo4j/neo4j/issues/ . So you confirm that the 
>> Java heap space error is caused by a possible bug in index handling. In 
>> that case I'm going to wait for news about a fix and retry in the future, 
>> hopefully soon!
>> Thank you for your work!
>>
>> Rita
>>
>>
>> On Monday, 20 April 2015 at 09:25:18 UTC+2, Mattias Persson wrote:
>>>
>>> Hi, the IndexHits instance returned isn't putting everything in that set 
>>> right away, but over time, to avoid returning duplicates when combining 
>>> transaction state and store state. Looking at it right now I see that this 
>>> can be made much better by returning hits from transaction state first, 
>>> putting _only_ those ids into the set and then comparing - but not adding 
>>> - while iterating over the ids returned from the store.
>>>
>>> Let me see if I can get around fixing that soon...
>>>
>>> On Friday, April 17, 2015 at 3:27:21 PM UTC+2, Rita wrote:
>>>>
>>>> I am very sorry to point out that those Lucene queries actually behave 
>>>> inconsistently: usually they are slower than before, and in most 
>>>> cases I get that error again (using a 6GB heap of 8GB total RAM).
>>>>
>>>> I am updating the graph with deletes and updates of nodes and 
>>>> relationships. I decreased the number of operations per transaction, but 
>>>> most of the time I still get this error.
>>>>
>>>> The insertion with Batch Inserter instead seems to be ok!
>>>>
>>>> What do you suggest, please? Could keeping version 1.9.9 or upgrading to 
>>>> 2.2.1 be a solution? Does the new version cover the packages related to 
>>>> these problems?
>>>>
>>>>
>>>> Thanks in advance
>>>>
>>>> Rita
>>>>
>>>> On Thursday, 16 April 2015 at 14:22:34 UTC+2, Rita wrote:
>>>>>
>>>>> Thank you for the reply, Michael. I have just published the issue.
>>>>> I was using -Xmx4g as usual. Now I've just tried with 6GB, opening and 
>>>>> closing a single transaction for every query like that on the different 
>>>>> indexes, and I do not get this exception.
>>>>> So it now needs more memory for the same operation. I will try other 
>>>>> cases. Please tell me if there is news on the issue.
>>>>> Thank you.
>>>>>
>>>>> Regards
>>>>> Rita
>>>>>
>>>>> On Thursday, 16 April 2015 at 13:04:01 UTC+2, Michael Hunger wrote:
>>>>>>
>>>>>> This seems like a bug.
>>>>>>
>>>>>> How much heap do you have? 
>>>>>>
>>>>>> Could you raise an issue on github.com/neo4j/neo4j/issues ?
>>>>>>
>>>>>> Thanks so much
>>>>>>
>>>>>> Michael
>>>>>>
>>>>>> On 16.04.2015 at 11:26, Rita wrote:
>>>>>>
>>>>>> Hi all,
>>>>>> I am moving from Neo4j 1.9.9 to Neo4j 2.2.0, in embedded mode 
>>>>>> using Java. I have added transactions for read operations as well, but 
>>>>>> now when I query my Lucene indexes like this:
>>>>>> now when I query my Lucene indexes as this
>>>>>>
>>>>>> rhits = index.query("cs", "*");
>>>>>> out.println("#" + rhits.size());
>>>>>> rhits.close();
>>>>>>
>>>>>>
>>>>>> as you can see, I do not have to iterate over the result; I only need 
>>>>>> the number of results. But this new Neo4j version seems to load everything 
>>>>>> into memory, and I get the following error on the first instruction.
>>>>>> memory and I get the following error in the first instruction.
>>>>>>
>>>>>> Exception in thread "main" java.lang.OutOfMemoryError: 

Re: [Neo4j] Schema#awaitIndexOnline forever?

2015-05-11 Thread Mattias Persson
Yes, Michael is correct: you must await the index becoming ONLINE in a 
separate transaction, because the index only starts to populate when the 
transaction that created it is closed.

On Monday, May 11, 2015 at 2:35:58 AM UTC+2, Michael Hunger wrote:
>
> Florent, can you best raise that as an GitHub issue?
>
> How much data is in your test-database?
>
> What happens if you run the await in a separate tx ?
>
> Michael
>
> On 10.05.2015 at 14:49, Florent Biville wrote:
>
> Hi, 
>
> I'm trying to run the following snippet (with Neo4j v2.2.1 / impermanent 
> graph database):
>
> try (Transaction tx = graphDB.beginTx()) {
>     IndexDefinition definition = graphDB.schema()
>             .indexFor(Labels.ARTIST)
>             .on("name")
>             .create();
>
>     graphDB.schema().awaitIndexOnline(definition, XXX, TimeUnit.MILLISECONDS);
>
>     // [...]
> }
>
>
>
> No matter how high I set XXX, an IllegalStateException will always be thrown.
>
> Is this specific to the impermanent graph database?
>
>
> Thanks a lot for your help!
>
> Florent
>
>
>
>
>



[Neo4j] Re: Neo4j-Import Shell

2015-04-20 Thread Mattias Persson
You seem to have 2^16 (65536) or more distinct relationship types; neo4j 
doesn't support that many. Is that intentional in your dataset?
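
A quick way to check a dataset against that limit before importing, sketched in shell. The CSV layout (":START_ID,:END_ID,:TYPE") and file path are assumptions for the demo; point the commands at your real relationship file.

```shell
#!/bin/sh
# Sketch: count distinct relationship types in an import csv, to stay
# under the 2^16 (65536) type limit mentioned above.
cat > /tmp/rels-demo.csv <<'EOF'
:START_ID,:END_ID,:TYPE
1,2,KNOWS
2,3,LIKES
3,1,KNOWS
EOF
# Skip the header, take the :TYPE column, count unique values.
types=$(tail -n +2 /tmp/rels-demo.csv | cut -d, -f3 | sort -u | wc -l | tr -d ' ')
echo "distinct relationship types: $types"
if [ "$types" -ge 65536 ]; then
  echo "too many relationship types for neo4j" >&2
fi
```

If the count is unexpectedly huge, a common cause is a dynamic value (an id or a timestamp) accidentally mapped into the :TYPE column.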

On Sunday, April 19, 2015 at 4:26:44 PM UTC+2, Jesse Liu wrote:
>
> Dear All,
>  
> I'd like to import the data using the neo4j-import tool.
>  
> And I used the shell command:
> $Neo4j_Home$/bin/neo4j-import --into /Neo4j/data/201407MD/ --nodes 
> node-header.csv,node.csv --relationships 
> relationship-header.csv,relationship.csv
>  
> But I've got the error. The messages.log is attached.
>  
> I re-examined my csv file several times and cannot find the reason.
>  
> Can anybody help me with this issue?
>  
> Thanks!
>  
> Best Regards!
> Yours, Jesse
>  
>



Re: [Neo4j] from Neo4j1.9.9 to Neo4j 2.2.0 exception querying Lucene

2015-04-20 Thread Mattias Persson
Hi, the IndexHits instance returned isn't putting everything in that set 
right away, but over time, to avoid returning duplicates when combining 
transaction state and store state. Looking at it right now I see that this 
can be made much better by returning hits from transaction state first, 
putting _only_ those ids into the set and then comparing - but not adding 
- while iterating over the ids returned from the store.

Let me see if I can get around fixing that soon...
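
That scheme can be sketched with plain Java collections. The names are hypothetical, and lists stand in for the real Lucene/IndexHits plumbing; the point is which ids land in the dedup set.

```java
import java.util.*;

public class DedupHits {
    // Sketch of the improved scheme described above: record only the ids
    // coming from transaction state in a set, then stream store ids and
    // skip (without adding) any id already seen. The dedup set's size is
    // bounded by the usually small transaction state, not the full result.
    static List<Long> combine(List<Long> txStateHits, Iterator<Long> storeHits) {
        Set<Long> seen = new HashSet<>(txStateHits);   // only tx-state ids on heap
        List<Long> result = new ArrayList<>(txStateHits);
        while (storeHits.hasNext()) {
            long id = storeHits.next();
            if (!seen.contains(id))                    // compare, but never add
                result.add(id);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Long> tx = Arrays.asList(10L, 11L);
        Iterator<Long> store = Arrays.asList(5L, 10L, 12L).iterator();
        // tx hits come first; the duplicate id 10 from the store is skipped.
        System.out.println(combine(tx, store));
    }
}
```

Collecting the result into a list here is only for demonstration; the real fix would keep streaming store hits lazily so heap stays proportional to transaction state.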

On Friday, April 17, 2015 at 3:27:21 PM UTC+2, Rita wrote:
>
> I am very sorry to point out that those Lucene queries actually behave 
> inconsistently: usually they are slower than before, and in most 
> cases I get that error again (using a 6GB heap of 8GB total RAM).
>
> I am updating the graph with deletes and updates of nodes and relationships. 
> I decreased the number of operations per transaction, but most of the time I 
> still get this error.
>
> The insertion with Batch Inserter instead seems to be ok!
>
> What do you suggest, please? Could keeping version 1.9.9 or upgrading to 
> 2.2.1 be a solution? Does the new version cover the packages related to 
> these problems?
>
>
> Thanks in advance
>
> Rita
>
> On Thursday, 16 April 2015 at 14:22:34 UTC+2, Rita wrote:
>>
>> Thank you for the reply, Michael. I have just published the issue.
>> I was using -Xmx4g as usual. Now I've just tried with 6GB, opening and 
>> closing a single transaction for every query like that on the different 
>> indexes, and I do not get this exception.
>> So it now needs more memory for the same operation. I will try other cases. 
>> Please tell me if there is news on the issue.
>> Thank you.
>>
>> Regards
>> Rita
>>
>> On Thursday, 16 April 2015 at 13:04:01 UTC+2, Michael Hunger wrote:
>>>
>>> This seems like a bug.
>>>
>>> How much heap do you have? 
>>>
>>> Could you raise an issue on github.com/neo4j/neo4j/issues ?
>>>
>>> Thanks so much
>>>
>>> Michael
>>>
>>> On 16.04.2015 at 11:26, Rita wrote:
>>>
>>> Hi all,
>>> I am moving from Neo4j 1.9.9 to Neo4j 2.2.0, in embedded mode using 
>>> Java. I have added transactions for read operations as well, but now 
>>> when I query my Lucene indexes like this:
>>>
>>> rhits = index.query("cs", "*");
>>> out.println("#" + rhits.size());
>>> rhits.close();
>>>
>>>
>>> as you can see, I do not have to iterate over the result; I only need the 
>>> number of results. But this new Neo4j version seems to load everything into 
>>> memory, and I get the following error on the first instruction.
>>>
>>> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
>>> at org.neo4j.collection.primitive.hopscotch.IntArrayBasedKeyTable.initializeTable(IntArrayBasedKeyTable.java:54)
>>> at org.neo4j.collection.primitive.hopscotch.IntArrayBasedKeyTable.<init>(IntArrayBasedKeyTable.java:48)
>>> at org.neo4j.collection.primitive.hopscotch.LongKeyTable.<init>(LongKeyTable.java:27)
>>> at org.neo4j.collection.primitive.Primitive.longSet(Primitive.java:66)
>>> at org.neo4j.kernel.impl.coreapi.LegacyIndexProxy$1.<init>(LegacyIndexProxy.java:296)
>>> at org.neo4j.kernel.impl.coreapi.LegacyIndexProxy.wrapIndexHits(LegacyIndexProxy.java:294)
>>> at org.neo4j.kernel.impl.coreapi.LegacyIndexProxy.query(LegacyIndexProxy.java:352)
>>>
>>> I never got this with older versions of Neo4j! I always performed this 
>>> operation up to version 1.9.9. 
>>> Could you please help me avoid this? Is it a bug in the library 
>>> implementation, or do I have to change the way I query?
>>>
>>> Thanks in advance,
>>> Rita
>>>
>>>
>>>
>>>



Re: [Neo4j] Unable to upgrade 2.1.6 database to 2.2.0-RC1

2015-03-12 Thread Mattias Persson
Hi, I've found and fixed the problem. Here is the pull request:
https://github.com/neo4j/neo4j/pull/4193. It will be included in 2.2 GA.

Best,
Mattias

On Tuesday, March 10, 2015 at 8:37:01 AM UTC+1, Michael Hunger wrote:
>
> Could you share the full log with me as well as a listing of the db-dir 
> content with sizes?
>
> Sent from my iPhone
>
> On 10.03.2015 at 04:08, bi...@levelstory.com  wrote:
>
> This looks like the relevant portion of the log:
>
> 2015-03-09 21:19:18.514+ INFO  [o.n.s.e.EnterpriseNeoServer]: Setting 
> startup timeout to: 12ms based on 12
> 2015-03-09 21:19:18.741+ INFO  [o.n.k.i.s.StoreMigrationTool]: 
> Starting upgrade of database store files
> 2015-03-09 21:19:18.792+ ERROR [o.n.s.p.PerformUpgradeIfNecessary]: 
> Unknown error
> java.lang.NegativeArraySizeException: null
> at 
> org.neo4j.kernel.impl.transaction.command.PhysicalLogNeoCommandReaderV1$PhysicalNeoCommandReader.readDynamicRecord(PhysicalLogNeoCommandReaderV1.java:507)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.transaction.command.PhysicalLogNeoCommandReaderV1$PhysicalNeoCommandReader.readPropertyRecord(PhysicalLogNeoCommandReaderV1.java:582)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.transaction.command.PhysicalLogNeoCommandReaderV1$PhysicalNeoCommandReader.visitPropertyCommand(PhysicalLogNeoCommandReaderV1.java:293)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.transaction.command.Command$PropertyCommand.handle(Command.java:325)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.transaction.command.PhysicalLogNeoCommandReaderV1.read(PhysicalLogNeoCommandReaderV1.java:185)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.transaction.log.entry.LogEntryParsersV4$4.parse(LogEntryParsersV4.java:134)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.transaction.log.entry.LogEntryParsersV4$4.parse(LogEntryParsersV4.java:126)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.transaction.log.entry.VersionAwareLogEntryReader.readLogEntry(VersionAwareLogEntryReader.java:80)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.transaction.log.entry.LogEntryReaderFactory$1.readLogEntry(LogEntryReaderFactory.java:71)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.transaction.log.entry.LogEntryReaderFactory$1.readLogEntry(LogEntryReaderFactory.java:67)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.storemigration.legacylogs.LogEntrySortingCursor.perhapsFetchEntriesFromChannel(LogEntrySortingCursor.java:90)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.storemigration.legacylogs.LogEntrySortingCursor.next(LogEntrySortingCursor.java:61)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.storemigration.legacylogs.LegacyLogs.getTransactionChecksum(LegacyLogs.java:131)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.storemigration.StoreMigrator.extractTransactionChecksum(StoreMigrator.java:248)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.storemigration.StoreMigrator.migrate(StoreMigrator.java:188)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.storemigration.StoreUpgrader.migrateToIsolatedDirectory(StoreUpgrader.java:282)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.storemigration.StoreUpgrader.migrateIfNeeded(StoreUpgrader.java:163)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.kernel.impl.storemigration.StoreMigrationTool.run(StoreMigrationTool.java:90)
>  
> ~[neo4j-kernel-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.server.preflight.PerformUpgradeIfNecessary.run(PerformUpgradeIfNecessary.java:81)
>  
> ~[neo4j-server-2.2.0-RC01.jar:2.2.0-RC01]
> at org.neo4j.server.preflight.PreFlightTasks.run(PreFlightTasks.java:71) 
> [neo4j-server-2.2.0-RC01.jar:2.2.0-RC01]
> at 
> org.neo4j.server.AbstractNeoServer.runPreflightTasks(AbstractNeoServer.java:387)
>  
> [neo4j-server-2.2.0-RC01.jar:2.2.0-RC01]
> at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:195) 
> [neo4j-server-2.2.0-RC01.jar:2.2.0-RC01]
> at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:117) 
> [neo4j-server-2.2.0-RC01.jar:2.2.0-RC01]
> at org.neo4j.server.Bootstrapper.main(Bootstrapper.java:69) 
> [neo4j-server-2.2.0-RC01.jar:2.2.0-RC01]
> 2015-03-09 21:19:18.792+ INFO  [o.n.s.p.PreFlightTasks]: Unable to 
> upgrade database
> 2015-03-09 21:19:18.794+ ERROR [o.n.s.e.EnterpriseBootstrapper]: 
> Failed to start Neo Server on port [7474]
> org.neo4j.server.ServerStartupException: Starting Neo4j Server failed: 
> Startup failed due to preflight task [class 
> org.neo4j.server.preflight.PerformUpgradeIfNecessary]: Unabl

Re: [Neo4j] Re: Import tool - stillExecuting errormessage

2014-12-05 Thread Mattias Persson
Thanks for reporting. This was a bug in encoding collision handling and has
now been fixed. The next release of the 2.2 version will include this fix.

On Friday, December 5, 2014 1:50:06 PM UTC+1, Rene Rath wrote:
>
> The problem arises as soon as an ID exceeds a length of 128 characters. Up
> to 128 chars, it seems to work fine.
>
> 2014-12-05 13:05 GMT+01:00 Rene Rath  >:
>
>> ... and same behaviour using the other argument syntax.
>>
>> ./neo4j-import --into /media/data/neo4j/test.db --nodes 
>> "/Users/d06/tmp/neo4j_2.2/products_head2.csv 
>> /Users/d06/tmp/neo4j_2.2/brands_head2.csv" --relationships 
>> /Users/d06/tmp/neo4j_2.2/products2brands_head2.csv --id-type STRING 
>> --stacktrace 
>>
>> -- 
>> You received this message because you are subscribed to a topic in the 
>> Google Groups "Neo4j" group.
>> To unsubscribe from this topic, visit 
>> https://groups.google.com/d/topic/neo4j/b1T66sf8oyY/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to 
>> neo4j+un...@googlegroups.com .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: [Neo4j] NotNotFoundException race condition

2014-11-28 Thread Mattias Persson
Thanks for reporting, I'll try to reproduce and get my head around why this
odd behaviour exists.

On Tue, Nov 25, 2014 at 10:52 PM, Clark Richey 
wrote:

> Correct. We have 'fixed' this issue by registering with the transaction
> notification handler and receiving notifications for transactions being
> committed as opposed to trying to load the node by id to determine if it is
> really available.
>
>
> Sent from my iPhone
>
> On Nov 25, 2014, at 16:45, Mattias Persson 
> wrote:
>
So may I sum this up as: you're temporarily seeing the created node, in
this manner (assuming we have a nodeIsVisible() method returning a boolean):
>
> false
> false
> ...
> false
> true
> false
> true
> true
> ...
>
> and you only see this behaviour in enterprise, not community. Is that
> correct?
>
>
> On Sat, Nov 22, 2014 at 5:18 PM, Clark Richey  wrote:
>
>>  All,
>> We have a highly concurrent custom bulk loader. When we run it using the
>> community version of Neo4j 2.1.5 it executes flawlessly. When we run it
>> against the enterprise version we see a race condition happening.
>>
>> Here is a simplified version of the workflow and what is happening:
>> During our load process we create a transaction within the scope of a
>> thread and within that transaction we create a node. We store the node id
>> in an AtomicLong and pass that to another thread.
>>
>> There is another thread running (in its own transaction) that receives
>> the AtomicLong from the thread that created the aforementioned node. It is
>> waiting on that node to get created because it has another node that needs
>> to create a relationship to that node. Because we know that the previous
>> thread may not have committed the transaction in which this node was
>> created (the thread is creating many nodes) we first perform a getNodeById
>> to see if the node is available. If it isn’t, we essentially throw it back
>> and come back to it later in the cycle. If the call to getNodeById
>> *doesn’t* throw a NodeNotFoundException, then we know the transaction
>> has been committed. Slightly later within this same thread we now attempt
>> to retrieve that node by its id and we now get a NodeNotFoundException.
>>
>> If we pause the loader at this point, or even wait for it to finish, and
>> then use the neo shell to check for the existence of the node we see that
>> it exists. It wasn’t deleted (there are no deletions happening in this
>> process anyways).
>>
>> Can someone PLEASE explain what is happening here?
>>
>>   Clark
>> Richey: Chief Technology Officer   e  cl...@factgem.com  p  240.252.7507
>>
>> This message and any included attachments are property of
>> FactGem and its affiliates, and are intended only for the addressee(s). The
>> information contained herein may include trade secrets or privileged or
>> otherwise confidential information. Unauthorized review, forwarding,
>> printing, copying, distributing, or using such information is strictly
>> prohibited and may be unlawful. If you received this message in error, or
>> have reason to believe you are not authorized to receive it, please
>> promptly delete this message and notify the sender by e-mail. Thank you.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Neo4j" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to neo4j+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> --
> Mattias Persson
> Neo4j Hacker at Neo Technology
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

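The "throw it back and come back later" retry pattern described in this thread can be sketched in plain Java. This is a hypothetical stand-in, not the Neo4j API; a map simulates commit visibility, and callers re-queue ids whose nodes are not yet visible instead of blocking:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in (no Neo4j dependency) for the retry pattern above:
// a consumer re-queues ids whose nodes are not yet visible and retries them
// later, rather than assuming visibility is monotonic across transactions.
class RetryOnMissing {
    // Simulated store: a key is present once the "transaction" has committed.
    static final Map<Long, String> store = new ConcurrentHashMap<>();

    static boolean nodeIsVisible(long id) {
        return store.containsKey(id);
    }

    // Try once; on false the caller throws the id back into its work queue
    // and comes back to it later in the cycle.
    static boolean tryLink(long id) {
        if (!nodeIsVisible(id)) {
            return false;
        }
        // ... create the relationship to node `id` here ...
        return true;
    }

    public static void main(String[] args) {
        System.out.println(tryLink(42));   // not committed yet: prints false
        store.put(42L, "node");            // simulated commit
        System.out.println(tryLink(42));   // now visible: prints true
    }
}
```

The non-monotonic visibility reported in the thread (true, then false, then true again) is exactly what this pattern cannot tolerate, which is why the posters moved to transaction-commit notifications instead of polling.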


Re: [Neo4j] NotNotFoundException race condition

2014-11-25 Thread Mattias Persson
So may I sum this up as: you're temporarily seeing the created node, in this
manner (assuming we have a nodeIsVisible() method returning a boolean):

false
false
...
false
true
false
true
true
...

and you only see this behaviour in enterprise, not community. Is that
correct?


On Sat, Nov 22, 2014 at 5:18 PM, Clark Richey  wrote:

>  All,
> We have a highly concurrent custom bulk loader. When we run it using the
> community version of Neo4j 2.1.5 it executes flawlessly. When we run it
> against the enterprise version we see a race condition happening.
>
> Here is a simplified version of the workflow and what is happening:
> During our load process we create a transaction within the scope of a
> thread and within that transaction we create a node. We store the node id
> in an AtomicLong and pass that to another thread.
>
> There is another thread running (in its own transaction) that receives the
> AtomicLong from the thread that created the aforementioned node. It is
> waiting on that node to get created because it has another node that needs
> to create a relationship to that node. Because we know that the previous
> thread may not have committed the transaction in which this node was
> created (the thread is creating many nodes) we first perform a getNodeById
> to see if the node is available. If it isn’t, we essentially throw it back
> and come back to it later in the cycle. If the call to getNodeById
> *doesn’t* throw a NodeNotFoundException, then we know the transaction has
> been committed. Slightly later within this same thread we now attempt to
> retrieve that node by its id and we now get a NodeNotFoundException.
>
> If we pause the loader at this point, or even wait for it to finish, and
> then use the neo shell to check for the existence of the node we see that
> it exists. It wasn’t deleted (there are no deletions happening in this
> process anyways).
>
> Can someone PLEASE explain what is happening here?
>
>  Clark Richey: Chief Technology Officer   e
> cl...@factgem.com  p  240.252.7507
>
> This message and any included attachments are property of FactGem
> and its affiliates, and are intended only for the addressee(s). The
> information contained herein may include trade secrets or privileged or
> otherwise confidential information. Unauthorized review, forwarding,
> printing, copying, distributing, or using such information is strictly
> prohibited and may be unlawful. If you received this message in error, or
> have reason to believe you are not authorized to receive it, please
> promptly delete this message and notify the sender by e-mail. Thank you.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] Why are "previous pointer" in relationship store necessary, and in what situation are they useful?

2014-10-27 Thread Mattias Persson
It's for deleting a relationship from the chain, to reroute the pointers of
the previous records to the next records. Inserted relationships are added
at the start of the chain because that's a pointer we always know about,
when looking at the node.
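The chain layout Mattias describes can be sketched as a doubly-linked list in plain Java (illustrative names, not the actual store format): with both prev and next pointers, deleting a record only reroutes its two neighbours, in O(1), without walking the chain from the node's first-relationship pointer.

```java
// Minimal sketch of one node's relationship chain. SP/SN (or EP/EN) are
// modelled as a single prev/next pair here for simplicity.
class RelChain {
    static class Rel {
        final long id;
        Rel prev, next;
        Rel(long id) { this.id = id; }
    }

    // The node record only stores the pointer to the first relationship.
    static Rel first;

    // Insert at the head of the chain: the only pointer we always know about.
    static Rel insert(long id) {
        Rel r = new Rel(id);
        r.next = first;
        if (first != null) first.prev = r;
        first = r;
        return r;
    }

    // Delete in O(1): reroute prev.next and next.prev around the record.
    static void delete(Rel r) {
        if (r.prev != null) r.prev.next = r.next; else first = r.next;
        if (r.next != null) r.next.prev = r.prev;
    }

    public static void main(String[] args) {
        Rel a = insert(1), b = insert(2), c = insert(3); // chain: 3 -> 2 -> 1
        delete(b);                                       // chain: 3 -> 1
        System.out.println(first.id + " -> " + first.next.id); // prints 3 -> 1
    }
}
```

Without the prev pointers, delete() would have to scan from `first` to find the record preceding the one being removed, making deletion O(chain length).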

On Sun, Oct 26, 2014 at 4:20 PM, jer  wrote:

> Hi all,
>
> I am going through how traversals are done in Neo4j, and have come up
> with a question about the relationship store. Each record is 33 bytes: 1
> in-use byte, 4 bytes: firstNode, 4 bytes: secondNode, 4 bytes:
> relationshipType, 4 bytes: start node previous relationship (SP), 4 bytes:
> start node next relationship (SN), 4 bytes: end node previous relationship
> (EP), 4 bytes: end node next relationship (EN). My question is: why are the
> two fields for previous relationships (SP and EP) useful at all?
>
> The node store file is organized such that only the index of the first
> property and the first relationship is stored. When I do paper simulations
> of a traversal, I can see a pattern emerge, e.g. Node -> First
> Relationship.SN (start next) -> Second Relationship.SN (start next) -> Third
> Relationship.EN (end next) ... and so forth. The point I'm trying to make
> here is that our traversal target can be organized as either the start or
> end node of the relationship, thus when going through the traversal it
> makes sense to see either the SN or EN pointer of the relationship being
> fetched. However, it seems to me that the previous pointers (SP and EP) are
> not useful in traversal. There doesn't seem to be a situation where you need
> to go back in the traversal. Even if we want to insert a new relationship
> between the start and the end node, we could just append another
> relationship to the end of the chain. Would you mind providing
> me with one case where the SP and EP indexes are useful?
>
> Yours Sincerely,
> Jer
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] neo4j broken/corrupted after ungraceful shutdown

2014-10-01 Thread Mattias Persson
Was the store upgraded from a previous version of neo4j?

Sorry, without the store I can't diagnose what went wrong. The method of
copying the healthy data over to a new store is, I believe, the best way to
go about it. The fact is that these problems should never occur in normal
operation, and each known occurrence usually results in some kind of bug fix
so that Neo4j does not produce the same inconsistency again.

I hope you can recover and continue.

Best,
Mattias

On Tue, Sep 30, 2014 at 3:07 PM, Ronen Ness  wrote:

> hi Mattias!
>
> I'm using neo4j community 2.1.3, and unfortunately I can't share the
> database (contains sensitive data).
> is there anything else I can send you?
>
> to be honest I'm less worried about why this happens; I'm more interested
> in how to fix those broken nodes, since I'm unable to delete them.
> I got a script on Stack Overflow that is supposed to copy the entire
> datastore without the broken nodes, but that solution seems very heavy and
> not practical if this problem ever happens in production.
>
> any ideas?
> thanks :)
>
> On Monday, September 29, 2014 2:34:40 PM UTC, Mattias Persson wrote:
>>
>> Which version are you using?
>>
>> Have you upgraded version recently?
>>
>> Are you comfortable sending your zipped up database (the whole folder) to
>> me, or similar, so that there's a higher chance the cause can be found?
>>
>> On Sun, Sep 28, 2014 at 1:38 PM, Ronen Ness  wrote:
>>
>>> hi all,
>>> I've posted this question
>>> <http://stackoverflow.com/questions/26079306/neo4j-broken-corrupted-after-ungraceful-shutdown>
>>>  on
>>> SO but I'm not getting too many answers there so I wanted to try here as
>>> well, since this looks like a genuine bug in neo4j.
>>>
>>> here's the deal: I have a neo db with approx 2 million relations and
>>> I've made some batch writing using the py2neo.neo4j.WriteBatch() object
>>> (I'm using python, this API <http://book.py2neo.org/en/latest/batches>).
>>> now during the writing operation neo4j crashed (I'm using Neo4j
>>> Community on windows), and when I rebooted and tried to read the nodes
>>> which were in the process of writing I got the following error:
>>>
>>> Error: NodeImpl#1292315 not found. This can be because someone else deleted 
>>> this entity while we were trying to read properties from it, or because of 
>>> concurrent modification of other properties on this entity. The problem 
>>> should be temporary.
>>>
>>>
>>> since I can get the node id from the exception string I tried deleting
>>> some of the broken nodes, but got the following errors:
>>>
>>> File "C:\Python27\lib\site-packages\py2neo\neo4j.py", line 1076, in
>>> _execute raise CustomCypherError(e) InvalidRecordException:
>>> PropertyRecord[2083536] not in use
>>>
>>> so my questions are these:
>>>
>>>
>>>    1. how could this have happened, since I'm using a transaction based
>>>API?
>>>2. how do I fix my db now?
>>>
>>> thanks in advance!
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Neo4j" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to neo4j+un...@googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
>>
>> --
>> Mattias Persson
>> Neo4j Hacker at Neo Technology
>>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] neo4j broken/corrupted after ungraceful shutdown

2014-09-29 Thread Mattias Persson
Which version are you using?

Have you upgraded version recently?

Are you comfortable sending your zipped up database (the whole folder) to
me, or similar, so that there's a higher chance the cause can be found?

On Sun, Sep 28, 2014 at 1:38 PM, Ronen Ness  wrote:

> hi all,
> I've posted this question
> <http://stackoverflow.com/questions/26079306/neo4j-broken-corrupted-after-ungraceful-shutdown>
>  on
> SO but I'm not getting too many answers there so I wanted to try here as
> well, since this looks like a genuine bug in neo4j.
>
> here's the deal: I have a neo db with approx 2 million relations and I've
> made some batch writing using the py2neo.neo4j.WriteBatch() object (I'm
> using python, this API <http://book.py2neo.org/en/latest/batches>).
> now during the writing operation neo4j crashed (I'm using Neo4j Community
> on windows), and when I rebooted and tried to read the nodes which were in
> the process of writing I got the following error:
>
> Error: NodeImpl#1292315 not found. This can be because someone else deleted 
> this entity while we were trying to read properties from it, or because of 
> concurrent modification of other properties on this entity. The problem 
> should be temporary.
>
>
> since I can get the node id from the exception string I tried deleting
> some of the broken nodes, but got the following errors:
>
> File "C:\Python27\lib\site-packages\py2neo\neo4j.py", line 1076, in
> _execute raise CustomCypherError(e) InvalidRecordException: PropertyRecord
> [2083536] not in use
>
> so my questions are these:
>
>
>1. how could this have happened, since I'm using a transaction based
>API?
>2. how do I fix my db now?
>
> thanks in advance!
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] What should the behavior be for shortest path from node to itself - BUG

2014-09-18 Thread Mattias Persson
I think it makes sense to return a path consisting of just the one node
that is start/end. Although I don't know if A* has some standard result for
this scenario.

Great to hear that you'd like to have a look and dig around yourself!
Please ask more questions in this thread if you'd like to get hints about
code navigation or the like.
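The behaviour Mattias suggests, a path consisting of just the one node that is both start and end, can be sketched like this. The Path type here is a simplified stand-in, not the Neo4j graph-algo API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: a shortest-path finder that returns a zero-length
// path when start == end, rather than null or a broken self-loop.
class TrivialPath {
    static final class Path {
        final List<String> nodes;
        Path(List<String> nodes) { this.nodes = nodes; }
        // Path length is the number of relationships traversed.
        int length() { return nodes.size() - 1; }
    }

    static Path shortestPath(String start, String end) {
        if (start.equals(end)) {
            return new Path(Arrays.asList(start)); // one node, zero relationships
        }
        // ... run the actual search otherwise (omitted in this sketch) ...
        return null;
    }

    public static void main(String[] args) {
        Path p = shortestPath("a", "a");
        System.out.println(p.length()); // prints 0
    }
}
```

This matches the documented contract ("return null if no path is found") while treating a node as trivially reachable from itself with a path of length zero, without inventing a weightless self-loop relationship.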

On Thu, Sep 18, 2014 at 10:15 AM, Jatin Puri  wrote:

> Hi Mattias,
>
> Thanks for response. I will probably try patching it and send pull
> request. Also this will get me more involved more in it :)
> About the behavior, what should it be? `null` or a path starting from node
> to itself? I couldn't find relevant documentation for it
>
> Jatin
>
> On Thu, Sep 18, 2014 at 1:21 PM, Mattias Persson <
> matt...@neotechnology.com> wrote:
>
>> Yup looks like a bug to me. I'm one of the authors of that algo
>> implementation, so I'll see if I can have a look at it soon.
>>
>> On Tue, Sep 16, 2014 at 7:17 PM, Jatin Puri  wrote:
>>
>>> There is a bug <https://github.com/neo4j/neo4j/issues/2987> in
>>> GraphAlgoFactory (trivial but nevertheless irritating). Basically if you
>>> try finding a single shortest path from a node to itself, it gives the
>>> following behavior:
>>>
>>> Using  `GraphAlgoFactory.astar`, it throws:
>>> org.neo4j.graphdb.NotFoundException: Relationship -1 not found
>>>
>>> Using `GraphAlgoFactory.dijkstra`, it returns:
>>> A Path starting (WeightedPath#startNode) from the node and ending
>>> (WeightedPath#endNode) at itself but with no relationship between them.
>>>
>>> I looked at the source and found the bug in each and was rectifying it.
>>> But I am not sure what the behavior should be.
>>>
>>> Documentation
>>> <https://github.com/neo4j/neo4j/blob/master/community/graph-algo/src/main/java/org/neo4j/graphalgo/PathFinder.java>
>>>  says
>>> that it should return null if no path is found. But for a path from a node
>>> to itself, should we assume a self-loop with no weight, given there
>>> is no explicit relationship between the node and itself? Or is the behavior
>>> of `dijkstra` correct? (I think it's wrong)
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Neo4j" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to neo4j+unsubscr...@googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
>>
>> --
>> Mattias Persson
>> Neo4j Hacker at Neo Technology
>>
>> --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "Neo4j" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/neo4j/5B3BLyRR_ww/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> neo4j+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> --
> Jatin Puri
> http://jatinpuri.com <http://www.jatinpuri.com>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] Re: Nodes not created

2014-09-18 Thread Mattias Persson
Please call tx.close(); that is where the actual commit happens. tx.success()
just marks the transaction as successful. And make sure tx.close() is called
in a finally block, or that the Transaction is a resource in a
try-with-resources block.
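The semantics described here can be illustrated with a minimal stand-in (not the Neo4j API): success() only sets a flag, and close() is the point where the commit-or-rollback decision actually happens, which is why close() must always run.

```java
// Minimal stand-in for the Transaction contract described above. The real
// Neo4j Transaction behaves this way; this toy just makes the flag visible.
class TxDemo {
    static class Tx implements AutoCloseable {
        private boolean success;
        boolean committed;
        void success() { success = true; }   // mark only; nothing is written yet
        @Override public void close() {      // the actual commit point
            committed = success;             // no success() call means rollback
        }
    }

    public static void main(String[] args) {
        Tx tx = new Tx();
        try {
            // ... do writes ...
            tx.success();
        } finally {
            tx.close();                      // commits, because success() ran
        }
        System.out.println(tx.committed);    // prints true

        Tx rolledBack = new Tx();
        try {
            // an exception thrown before success() would land here too
        } finally {
            rolledBack.close();              // rolls back: success() never ran
        }
        System.out.println(rolledBack.committed); // prints false
    }
}
```

In the code quoted below, transaction.success() is called but close() never is, so the transaction is never committed, which is why the nodes silently fail to appear.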

On Thu, Sep 18, 2014 at 11:19 AM, Sukaant Chaudhary <
sukaant.chaudh...@gmail.com> wrote:

> It is not giving any error either, but the nodes are not created in the db.
> Please help if anyone have any idea about this.
>
> -Sukaant Chaudhary
> <http://in.linkedin.com/pub/sukaant-chaudhary/33/ba8/479>
>
> On Thu, Sep 18, 2014 at 11:43 AM, Sukaant Chaudhary <
> sukaant.chaudh...@gmail.com> wrote:
>
>> Hi,
>> I'm using the following code. Please suggest why it is not creating the
>> nodes.
>>
>> GraphDatabaseService db = new GraphDatabaseFactory()
>> .newEmbeddedDatabase(NEO4J_DB_PATH);
>> ExecutionEngine engine = new ExecutionEngine(db);
>> Transaction transaction = db.beginTx();
>> ExecutionResult result;
>> String query = "MERGE (ad:Ad {name: {name}, date: {date}}) ON
>> MATCH SET ad.reach = ad.reach + {reach}, ad.totalViewTime =
>> ad.totalViewTime + {totalViewTime} ON CREATE SET ad.reach = {reach},
>> ad.totalViewTime = {totalViewTime} RETURN ad";
>> Map<String, Object> paramValues = new HashMap<String, Object>();
>> paramValues.put("name", adNodePojo.getName());
>> paramValues.put("date", adNodePojo.getDate());
>> paramValues.put("reach", adNodePojo.getReach());
>> paramValues.put("totalViewTime",
>> adNodePojo.getTotalViewTime());
>>
>> result = engine.execute(query, paramValues);
>>
>> Iterator<Node> column =
>> result.columnAs(AD_NODE_PROPERTY_NAME);
>>
>> for (Node node : IteratorUtil.asIterable(column)) {
>> System.out.println("Node: " + node);
>> System.out.println("Name: " + ": " +
>> node.getProperty("name"));
>> System.out.println("Date: " + ": " +
>> node.getProperty("date"));
>> System.out
>> .println("Reach: " + ": " +
>> node.getProperty("reach"));
>> System.out.println("View: " + ": "
>> + node.getProperty("totalViewTime"));
>> }
>> transaction.success();
>>
>>
>> -Sukaant Chaudhary
>> <http://in.linkedin.com/pub/sukaant-chaudhary/33/ba8/479>
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] Uniqueness Constraint Being Violated, Neo4J 2.1.3 Bug?

2014-09-18 Thread Mattias Persson
I feel that MERGE or no MERGE, neo4j should always keep this constraint. So
to me it absolutely feels like a bug where neo4j fails to detect the
constraint violation. I'd love to try and reproduce to see where the
problem is.
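The difference between a racy check-then-CREATE and an atomic MERGE-style upsert can be illustrated by analogy in plain Java. This is not the Neo4j API: ConcurrentHashMap.computeIfAbsent stands in for MERGE's lock-protected get-or-create, and the map stands in for the store.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Analogy: why a separate existence check followed by CREATE can produce
// duplicates under concurrency, while an atomic get-or-create cannot.
class UpsertDemo {
    static final Map<Long, String> nodesBySegmentId = new ConcurrentHashMap<>();

    // Racy: another thread can create the same segmentId between the
    // containsKey check and the put, yielding duplicates in a real store.
    static void checkThenCreate(long segmentId, String name) {
        if (!nodesBySegmentId.containsKey(segmentId)) {
            nodesBySegmentId.put(segmentId, name);
        }
    }

    // Atomic: the mapping function runs under the map's internal lock,
    // so exactly one creation wins; later callers see the existing value.
    static String merge(long segmentId, String name) {
        return nodesBySegmentId.computeIfAbsent(segmentId, id -> name);
    }

    public static void main(String[] args) {
        System.out.println(merge(110484L, "first"));  // prints first
        System.out.println(merge(110484L, "second")); // prints first
    }
}
```

The thread's eventual finding (that Neo4j should enforce the constraint even for CREATE, and that failing to do so is a bug) still stands; MERGE is the operation that additionally takes the locks needed to make the upsert itself race-free.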

On Wed, Sep 17, 2014 at 3:31 PM, Saad Mufti  wrote:

> I don't understand: of what use is the uniqueness constraint then, if it
> can be violated so easily when you use the CREATE Cypher command?
>
> MERGE will not work for us because we are trying to create two logically
> separate nodes while ensuring that they have unique segment id's, and want
> the constraint to throw an error if we happen to violate the uniqueness
> constraint. Is the Neo4J constraint not useful for that use case?
>
> Thanks.
>
> 
> Saad
>
> On Tuesday, September 16, 2014 5:57:00 PM UTC-4, Michael Hunger wrote:
>>
>> MERGE is the only cypher operation that guarantees uniqueness and also
>> takes the necessary machine- and cluster-wide locks to assure that.
>>
>> Cheers, Michael
>>
>> On Tue, Sep 16, 2014 at 11:50 PM, Saad Mufti  wrote:
>>
>>> We're already writing to the master.
>>>
>>> What do you mean by "with appropriate locks"? We're using Cypher over
>>> REST hitting the transactional endpoints as documented at:
>>>
>>> http://docs.neo4j.org/chunked/stable/rest-api-transactional.html
>>>
>>> I don't see anything documented there to allow obtaining of any kind of
>>> lock over the REST API.
>>>
>>> We are not using MERGE, will use that and update this thread.
>>>
>>> Thanks.
>>>
>>> 
>>> Saad
>>>
>>>
>>> On Tuesday, September 16, 2014 5:31:27 PM UTC-4, Michael Hunger wrote:
>>>>
>>>> Also I recommend that you focus on writing to the master and not the
>>>> slaves, for higher performance.
>>>>
>>>> Feel free to raise a support ticket with our Neo Technology customer
>>>> account when the issue persists with MERGE
>>>>
>>>> On Tue, Sep 16, 2014 at 11:29 PM, Michael Hunger <
>>>> michael...@neotechnology.com> wrote:
>>>>
>>>>> To assure uniqueness across multiple threads and a cluster (with
>>>>> appropriate locks), please use MERGE:
>>>>>
>>>>> MERGE (n:SEGMENT {segmentId: 110484}) ON CREATE SET n.name = "name";
>>>>>
>>>>>
>>>>> On Tue, Sep 16, 2014 at 11:15 PM, Saad Mufti 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We have a Neo4J 2.1.3 database and we have a uniqueness constraint
>>>>>> that was created as follows:
>>>>>>
>>>>>> CREATE CONSTRAINT ON (segment:SEGMENT) ASSERT segment.segmentId IS
>>>>>> UNIQUE
>>>>>>
>>>>>> When we test this from the browser, it works fine in detecting
>>>>>> violations, e.g:
>>>>>>
>>>>>> CREATE (n:SEGMENT {name : "duplicate", segmentId : 110484}) RETURN n
>>>>>>
>>>>>> results in
>>>>>>
>>>>>> Node 589 already exists with label SEGMENT and property 
>>>>>> "segmentId"=[110484]
>>>>>>
>>>>>>  Neo.ClientError.Schema.ConstraintViolation
>>>>>>
>>>>>> which is fine.
>>>>>>
>>>>>> We have a load tester setup with  3 machines and multiple threads per
>>>>>> box using Cypher over REST talking to Neo4J and using the transactional
>>>>>> endpoints to do creates similar to above (but of course many more
>>>>>> properties relevant to our app), and always writing to the Neo4J master 
>>>>>> in
>>>>>> an HA setup.
>>>>>>
>>>>>> We can reliably reproduce in that setup multiple violations of the
>>>>>> uniqueness constraint that are NOT caught by Neo4J, they execute without
>>>>>> error and in the resulting db we can see multiple nodes with the SEGMENT
>>>>>> label and the same value for the segmentId property (we are intentionally
>>>>>> generating duplicate segmentId values for our test).
>>>>>>
>>>>>> Anyone else run into the same issue? Is this a Neo4J bug?
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> -
>>>>>> Saad
>>>>>>
>>>>>>  --
>>>>>> You received this message because you are subscribed to the Google
>>>>>> Groups "Neo4j" group.
>>>>>> To unsubscribe from this group and stop receiving emails from it,
>>>>>> send an email to neo4j+un...@googlegroups.com.
>>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>>
>>>>>
>>>>>
>>>>  --
>>> You received this message because you are subscribed to the Google
>>> Groups "Neo4j" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to neo4j+un...@googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] What should the behavior be for shortest path from node to itself - BUG

2014-09-18 Thread Mattias Persson
Yup looks like a bug to me. I'm one of the authors of that algo
implementation, so I'll see if I can have a look at it soon.

On Tue, Sep 16, 2014 at 7:17 PM, Jatin Puri  wrote:

> There is a bug <https://github.com/neo4j/neo4j/issues/2987> in
> GraphAlgoFactory (trivial but nevertheless irritating). Basically if you
> try finding a single shortest path from a node to itself, it gives
> following behavior:
>
> Using  `GraphAlgoFactory.astar`, it throws:
> org.neo4j.graphdb.NotFoundException: Relationship -1 not found
>
> Using `GraphAlgoFactory.dijkstra`, it returns:
> A Path starting (WeightedPath#startNode) from the node and ending
> (WeightedPath#endNode) at itself but with no relationship between them.
>
> I looked at the source and found the bug in each and was rectifying it.
> But I am not sure what the behavior should be.
>
> Documentation
> <https://github.com/neo4j/neo4j/blob/master/community/graph-algo/src/main/java/org/neo4j/graphalgo/PathFinder.java>
>  says
> that it should return null if no path is found. But for a path from a node
> to itself, should we assume it is a self-loop with no weight, given there
> is no explicit relationship from the node to itself? Or is the behavior of
> `dijkstra` correct? (I think it's wrong)
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] batch import 2.0 > avoiding duplicate relationships?

2014-08-19 Thread Mattias Persson
The batch inserter does not use cypher queries to insert data so it's not
that straightforward. And introducing such a check would severely affect
performance and only be applicable in some use cases. It's better if the
code that uses the batch inserter does these checks for your specific use
case.
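One way to do that check in caller code, assuming node ids are known at insert time (a hypothetical helper, not part of the BatchInserter API): keep a set of canonical undirected node-id pairs and only call `createRelationship` the first time a pair is seen.

```java
import java.util.HashSet;
import java.util.Set;

// Pre-insert dedup for undirected relationships: before calling
// inserter.createRelationship(a, b, ...), skip the call when the
// unordered pair (a, b) has already been inserted.
public class RelDedup {
    private final Set<Long> seen = new HashSet<>();

    // Canonical key for an unordered pair of node ids
    // (assumes ids fit in 32 bits; adjust the packing otherwise).
    private static long key(long a, long b) {
        long lo = Math.min(a, b), hi = Math.max(a, b);
        return (hi << 32) | lo;
    }

    /** Returns true only the first time this unordered pair is seen. */
    public boolean firstTime(long a, long b) {
        return seen.add(key(a, b));
    }

    public static void main(String[] args) {
        RelDedup d = new RelDedup();
        System.out.println(d.firstTime(1, 2)); // true  -> create the relationship
        System.out.println(d.firstTime(2, 1)); // false -> duplicate of (1,2), skip
    }
}
```

For very large imports the set's memory footprint matters; a primitive-long set or an off-heap structure would be the usual refinement.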


On Mon, Aug 18, 2014 at 9:10 PM, gg4u  wrote:

> hello, someone could tell me which file /where to modify batch importer so
> that to create relationships like:
>
> Merge (a)-[:REL]-(b)
> On create set r.weight = 123
>
> ?
>
> thank you!
>
>
> On Saturday, 16 August 2014 16:34:52 UTC+2, gg4u wrote:
>
>> Hi Mattias,
>>
>> I see.
>> So how are the relationships and nodes created: using *create* or *merge*?
>>
>> I think merge would solve the issue for a check on duplication of
>> relationships,
>> especially if the directions of connection could be specified or
>> unspecified.
>>
>> Java is not my thing; maybe someone can point out where to change the
>> query in the batch importer to avoid duplication with something like:
>>
>> merge a-[r]-b
>>
>> ?
>>
>> On Friday, 15 August 2014 14:03:16 UTC+2, Mattias Persson wrote:
>>>
>>> The batch inserter
>>> <http://docs.neo4j.org/chunked/2.1.3/javadocs/org/neo4j/unsafe/batchinsert/BatchInserter.html>
>>> does no such checks, no
>>>
>>>
>>> On Thu, Aug 14, 2014 at 6:54 PM, gg4u  wrote:
>>>
>>>> Hi,
>>>>
>>>> a quick note on the batch importer:
>>>>
>>>> does it import the relationships with an equivalent *create
>>>> ()-[]->()* or *merge ()-[]-()*?
>>>>
>>>> In order to reduce the size of the graph, I would like to avoid having
>>>> duplicate relationships, in the sense that if a rel a-[]-b exists, it is
>>>> equivalent to b-[]-a, and the latter should be ignored.
>>>>
>>>> In case of a weighted graph, does merge ignore b-[r]-a if a-[r]-b exist
>>>> but r
>>>> has different weight?
>>>>
>>>> thank you !
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "Neo4j" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to neo4j+un...@googlegroups.com.
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>>>
>>>
>>> --
>>> Mattias Persson
>>> Neo4j Hacker at Neo Technology
>>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] batch import 2.0 > avoiding duplicate relationships?

2014-08-15 Thread Mattias Persson
The batch inserter
<http://docs.neo4j.org/chunked/2.1.3/javadocs/org/neo4j/unsafe/batchinsert/BatchInserter.html>
does no such checks, no


On Thu, Aug 14, 2014 at 6:54 PM, gg4u  wrote:

> Hi,
>
> a quick note on the batch importer:
>
> does it import the relationships with an equivalent *create ()-[]->()*
> or *merge ()-[]-()*?
>
> In order to reduce the size of the graph, I would like to avoid having
> duplicate relationships, in the sense that if a rel a-[]-b exists, it is
> equivalent to b-[]-a, and the latter should be ignored.
>
> In case of a weighted graph, does merge ignore b-[r]-a if a-[r]-b exist
> but r
> has different weight?
>
> thank you !
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] (Michael)-[:ON]->(Vacation)

2014-08-04 Thread Mattias Persson
Have a nice well-deserved vacation!


On Mon, Aug 4, 2014 at 12:41 PM, Peter Neubauer  wrote:

> Michael,
> have a great time with the family, too. I hope not to see you around here
> ;)
>
> /peter
>
>
> G:  neubauer.peter
> S:  peter.neubauer
> P:  +46 704 106975
> L:   http://www.linkedin.com/in/neubauer
> T:   @peterneubauer <http://twitter.com/peterneubauer>
>
> Open Data- @mapillary <http://mapillary.com>
> Open Source - @neo4j <http://neo4j.org>
> Open Future  - @coderdojo <http://malmo.coderdojo.se>
>
>
> On Mon, Aug 4, 2014 at 12:32 PM, Michael Hunger <
> michael.hun...@neotechnology.com> wrote:
>
>> Just a heads up,
>>
>> I'm on vacation most of August, that means I'll work a bit less and won't
>> be answering google group e-mails and stackoverflow
>> <http://stackoverflow.com/questions/tagged/neo4j> questions that quickly
>> (or at all).
>>
>> I'd love if any of you could continue to chime in during that time and
>> help us keep up the good community vibes. Thanks so much for doing so
>> already.
>>
>> Have a great summertime,
>>
>> Cheers,
>>
>> Michael
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Neo4j" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to neo4j+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Upgrade from 2.0.3 to 2.1.3 ==> Starting Neo4j Server failed: ... Unable to upgrade database

2014-08-04 Thread Mattias Persson
Perfect, would be great if you could send some link to me and possibly
Michael directly.


On Mon, Aug 4, 2014 at 10:01 AM, AJ NOURI  wrote:

> Hi Michael,
> Thanks for your prompt reply. Do you mean the compressed content of 
> default.graphdb
> directory? it is 60 MB.
>
>
> AJ
>
>
> 2014-08-04 8:35 GMT+02:00 Mattias Persson :
>
>> That looks awfully weird, I can't see how that NPE comes to be. Your
>> database would help immensely.
>>
>>
>> On Sun, Aug 3, 2014 at 2:01 PM, Michael Hunger <
>> michael.hun...@neotechnology.com> wrote:
>>
>>> Hi,
>>>
>>> Sorry to hear that, I'll take it up with our engineering team.
>>>
>>>  Would you be able to share your database (pre upgrade and post upgrade)
>>> ?
>>>
>>> Please continue until then with 2.0.3
>>>
>>> Thanks so much,
>>>
>>> Michael
>>>
>>>
>>>
>>> On Sun, Aug 3, 2014 at 12:16 PM, AJ NOURI  wrote:
>>>
>>>> The last working version was* 2.0.3*. I cannot upgrade to any higher
>>>> version, I'am getting the same error message:
>>>>
>>>> Starting Neo4j Server failed: Startup failed due to preflight task [
>>>>> class org.neo4j.server.preflight.PerformUpgradeIfNecessary]: Unable
>>>>> to upgrade database
>>>>>
>>>>>
>>>>
>>>> Though, I followed instructions in
>>>> http://docs.neo4j.org/chunked/snapshot/deployment-upgrading.html#deployment-upgrading-two-zero
>>>> and have changed  *"allow_store_upgrade=true"* inside
>>>> *neo4j.properties* to allow upgrade.
>>>>
>>>>
>>>> -
>>>>
>>>> Here is the content of *console.log*
>>>>
>>>>
>>>> 12:04:35.475 [AWT-EventQueue-0] INFO  org.neo4j.server.CommunityNeoServer - Setting startup timeout to: 12ms based on -1
>>>> 12:04:35.702 [AWT-EventQueue-0] INFO  o.n.s.p.PerformUpgradeIfNecessary - Failed to start Neo4j with an older data store version. To enable automatic upgrade, please set configuration parameter "allow_store_upgrade=true"
>>>> 12:04:35.703 [AWT-EventQueue-0] INFO  o.n.server.preflight.PreFlightTasks - Failed to start Neo4j with an older data store version. To enable automatic upgrade, please set configuration parameter "allow_store_upgrade=true"
>>>> 12:05:32.729 [AWT-EventQueue-0] INFO  org.neo4j.server.CommunityNeoServer - Setting startup timeout to: 12ms based on -1
>>>> 12:05:32.847 [AWT-EventQueue-0] ERROR o.n.s.p.Perform

Re: [Neo4j] Upgrade from 2.0.3 to 2.1.3 ==> Starting Neo4j Server failed: ... Unable to upgrade database

2014-08-03 Thread Mattias Persson
toreMigrationTool.run(StoreMigrationTool.java:86)
>> ~[neo4j-desktop-2.1.3.jar:2.1.3] at
>> org.neo4j.server.preflight.PerformUpgradeIfNecessary.run(PerformUpgradeIfNecessary.java:84)
>> ~[neo4j-desktop-2.1.3.jar:2.1.3] at
>> org.neo4j.server.preflight.PreFlightTasks.run(PreFlightTasks.java:71)
>> [neo4j-desktop-2.1.3.jar:2.1.3] at
>> org.neo4j.server.AbstractNeoServer.runPreflightTasks(AbstractNeoServer.java:357)
>> [neo4j-desktop-2.1.3.jar:2.1.3] at
>> org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:154)
>> [neo4j-desktop-2.1.3.jar:2.1.3] at
>> org.neo4j.desktop.runtime.DatabaseActions.start(DatabaseActions.java:68)
>> [neo4j-desktop-2.1.3.jar:2.1.3] at
>> org.neo4j.desktop.ui.StartDatabaseActionListener$1.run(StartDatabaseActionListener.java:61)
>> [neo4j-desktop-2.1.3.jar:2.1.3] at
>> java.awt.event.InvocationEvent.dispatch(Unknown Source) [na:1.7.0_51] at
>> java.awt.EventQueue.dispatchEventImpl(Unknown Source) [na:1.7.0_51] at
>> java.awt.EventQueue.access$200(Unknown Source) [na:1.7.0_51] at
>> java.awt.EventQueue$3.run(Unknown Source) [na:1.7.0_51] at
>> java.awt.EventQueue$3.run(Unknown Source) [na:1.7.0_51] at
>> java.security.AccessController.doPrivileged(Native Method) [na:1.7.0_51] at
>> java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
>> [na:1.7.0_51] at java.awt.EventQueue.dispatchEvent(Unknown Source)
>> [na:1.7.0_51] at
>> java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
>> [na:1.7.0_51] at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown
>> Source) [na:1.7.0_51] at
>> java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
>> [na:1.7.0_51] at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
>> [na:1.7.0_51] at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
>> [na:1.7.0_51] at java.awt.EventDispatchThread.run(Unknown Source)
>> [na:1.7.0_51] 12:10:05.747 [AWT-EventQueue-0] INFO
>>  o.n.server.preflight.PreFlightTasks - Unable to upgrade database*
>>
>>
>>   --
>> You received this message because you are subscribed to the Google Groups
>> "Neo4j" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to neo4j+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Do I have to split relations of a node into domains?

2014-08-01 Thread Mattias Persson
The two approaches are essentially the same, although in 2.1 the "meta
node" benefits are built into the store format.

If you're querying relationships that are in a minority group
(type+direction) there's a good benefit, but if you query relationships in
a majority group, i.e. the most common type+direction, the benefit is not
as good.


On Sun, Jul 20, 2014 at 1:46 AM, Frandro  wrote:

> In my case, as the edges of a node grow the performance becomes worse.
> My use case includes traversing all neighbor nodes and their neighbor
> nodes. But the problem is that their relations of two types are growing.
>
> I've read the following board.
>
> https://groups.google.com/forum/#!searchin/neo4j/performance/neo4j/g63fTmPM4GE/vdSy5whsWgoJ
>
> There's a comment that recommends creating a meta node to have the most
> edges of the node.
> Another one is saying the problem will be mitigated in Neo4j 2.1.
>
> Any helpful comments will be appreciated.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Multithreading over a read-only Graph Database

2014-07-03 Thread Mattias Persson
Could you elaborate a bit on how you query?

On another note I know that the upcoming 2.2 will alleviate a bunch of
contention points around transactions.
On 2 Jul 2014 15:08, "ashish jindal" wrote:

> Hi,
> I am using neo4j embedded 2.1.2. My use case is only read operations over
> a graph database which I create once initially. The graph contains indexes
> over nodes and relationships. Read operations include iterating over
> resources, queries on indexes and graph traversals.
> So the issue I am facing is: read operations seem inefficient under
> multithreading.
> e.g.
> A read operation (iterating over all nodes) takes about 300ms in a single
> thread, but with multithreading it seems to wait until one thread is
> finished, so 100 parallel threads take about 30 secs, which is as good as
> sequential.
> This may be due to the way I am using transactions. I want to know the
> best way to use transactions in my use case.
> Can somebody help me out.
>
> Thanks,
> Ashish
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] neo4j upgrade, loss of data!

2014-06-30 Thread Mattias Persson
..2014-06-19 08:01:48.923+ INFO  [org.neo4j]: Store
>>>> upgrade 60% complete
>>>>  60%
>>>> ..2014-06-19 08:01:48.923+ INFO  [org.neo4j]: Store
>>>> upgrade 70% complete
>>>>  70%
>>>> ..2014-06-19 08:01:48.923+ INFO  [org.neo4j]: Store
>>>> upgrade 80% complete
>>>>  80%
>>>> ..2014-06-19 08:01:48.924+ INFO  [org.neo4j]: Store
>>>> upgrade 90% complete
>>>>  90%
>>>> ..2014-06-19 08:01:48.924+ INFO  [org.neo4j]: Store
>>>> upgrade 100% complete
>>>>  100%
>>>> 2014-06-19 08:01:48.924+ INFO  [org.neo4j.unsafe.impl.batchimport.
>>>> ParallellBatchImporter] Import completed [TODO import stats]
>>>> 2014-06-19 08:01:48.924+ INFO  [org.neo4j.unsafe.impl.batchimport.
>>>> ParallellBatchImporter] Import completed [TODO import stats]
>>>> Finished upgrade of database store files
>>>> 2014-06-19 08:01:49.578+ INFO  [org.neo4j]: Finished upgrade of
>>>> database store files
>>>> 2014-06-19 08:01:>>>
>>>> ...
>>>
>>>  --
> You received this message because you are subscribed to a topic in the
> Google Groups "Neo4j" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/neo4j/s3K4k_9orGQ/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> neo4j+unsubscr...@googlegroups.com.
>
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Re: dijkstra bidirectional

2014-06-17 Thread Mattias Persson
Sorry for not replying but time is a scarce resource :( I don't expect to
get time for this in the coming months. Perhaps there are others willing
to help out!

Take care
Best,
Mattias
On 10 Jun 2014 12:06, "Antonio Grimaldi" <
antonio.grimaldim...@gmail.com> wrote:

> Is *org.neo4j.graphalgo.impl.shortestpath.Dijkstra* a bidirectional
> Dijkstra implementation?
>
> On Thursday, 8 May 2014 17:04:32 UTC+2, Antonio Grimaldi wrote:
>>
>> Hi,
>> Is there an implementation of Bidirectional Dijkstra?
>>
>> Thanks
>> Antonio
>>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Question about Relationships number and their influence on algorithms

2014-05-26 Thread Mattias Persson
Hi,

About the number of relationships per node and how that affects things: it
affects the load performance of a node the first time its relationships are
touched after the node being freshly loaded into memory (either first time,
or after eviction from cache). Iterating over all of a certain type, as in
your case, requires all relationships to be loaded for that node. In 2.1
there will be a store format change where only relationships of the
requested type and direction are loaded. Nodes that go over a certain
threshold number of relationships will take advantage of such a
representation on disk, so in your case, where the worst case is 4
relationships, or more specifically 1 relationship of each type, there will
be no gain from that store format change.

The geo estimate evaluator in neo4j takes into consideration the fact that
the earth ain't flat, but if a simpler version works, then by all means :)
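For reference, a great-circle (haversine) estimate of the kind such an evaluator would use can be sketched as follows; this is a minimal standalone version, not Neo4j's actual EstimateEvaluator implementation.

```java
// Great-circle distance: a flat-earth straight-line estimate drifts on long
// routes, so a geo A* heuristic typically uses the haversine formula instead.
public class Haversine {
    static final double EARTH_RADIUS_M = 6_371_000;

    /** Distance in meters between two (lat, lon) points given in degrees. */
    public static double distanceMeters(double lat1, double lon1,
                                        double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // One degree of longitude at the equator is roughly 111 km.
        System.out.printf("%.0f m%n", distanceMeters(0, 0, 0, 1));
    }
}
```

For A* to stay admissible the heuristic must never overestimate the real path cost, which the great-circle distance satisfies for ground routes measured in meters.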

I think it's safe to say that the TraversalAStar isn't really experimental
anymore, it's probably a notice left in there by mistake.



On Mon, May 19, 2014 at 2:30 PM, Angelo Immediata wrote:

> Hi Michael
>
> For hot dataset do you mean a dataset stored in memory? Well we tried both
> for in memory dataset and for dataset on the disk
> Well, I don't know exactly how many relationships a node can have... the
> worst case is that each node contains 4 relationships, each one with
> direction BOTH
>
>
> 2014-05-19 13:22 GMT+02:00 Michael Hunger <
> michael.hun...@neotechnology.com>:
>
>> Is this for a hot dataset, or one that has to be fetched from disk?
>> How many rels do you usually have per node?
>>
>>
>> On Mon, May 19, 2014 at 9:04 AM, Angelo Immediata wrote:
>>
>>> Hi there
>>>
>>> With my colleague, we are building a route system using neo4j
>>> 2.0.3; so we are using A* and Dijkstra algorithms in order to calculate the
>>> shortest path.
>>> I was wondering if the number of relationships can affect algorithm
>>> performance. I mean, we have a graph with around 1 million (or more)
>>> nodes and 50 million relationships. We have several types of
>>> relationship; specifically we have:
>>>
>>>- relationships for cars: the most of relationships are of this type
>>>- relationships for bikes
>>>- relationships for pedestrian
>>>- relationships for public transports
>>>
>>> When we execute Dijkstra and/or A* we can specify, in our PathExpander,
>>> the type of the relationships we want to consider during the traverser, so,
>>> my feeling is that the number of relationships should not affect algorithm
>>> performance since we will sparsely (almost never) consider all the
>>> relationship types. Am I right?
>>>
>>> Thank you
>>> Angelo
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Neo4j" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to neo4j+unsubscr...@googlegroups.com.
>>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "Neo4j" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/neo4j/YtOt_rNy9sA/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> neo4j+unsubscr...@googlegroups.com.
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Neo4j] Re: GraphAlgoFactory.dijkstra( PathExpander expander, String relationshipPropertyRepresentingCost ) very slow performance

2014-05-14 Thread Mattias Persson
So you're opening a GraphDatabaseService, using a GraphDatabaseFactory for 
doing the queries, right?

Keep in mind that initial runs, where nodes and relationships are read into 
memory, take additional time, so what happens if you run the same query 
twice?

Also the memory mapping settings has the pattern:

neostore.<store>.mapped_memory

so in your setup you should have:

neostore.nodestore.db.mapped_memory=100M
neostore.relationshipstore.db.mapped_memory=3G
neostore.propertystore.db.mapped_memory=100M
neostore.propertystore.db.strings.mapped_memory=200M
neostore.propertystore.db.arrays.mapped_memory=50M

And an index wouldn't help since no indexes are used by the dijkstra 
algo.
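Why an index on the cost property can't help is visible in the algorithm's shape: Dijkstra only ever follows relationships outward from already-settled nodes and reads each one's cost, never looking anything up by property value. A minimal sketch on a toy adjacency map (an assumed illustration, not the GraphAlgoFactory implementation):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Dijkstra over a toy graph: node -> array of {neighbor, cost} edges.
// Every step is "pop cheapest frontier node, relax its outgoing edges";
// there is no lookup by cost value anywhere, so indexing the cost helps nothing.
public class Dijkstra {
    /** Returns shortest distances from src to every reachable node. */
    public static Map<Integer, Double> shortestFrom(Map<Integer, double[][]> graph, int src) {
        Map<Integer, Double> dist = new HashMap<>();
        PriorityQueue<double[]> pq =
                new PriorityQueue<>(Comparator.comparingDouble(e -> e[1]));
        pq.add(new double[]{src, 0});
        while (!pq.isEmpty()) {
            double[] top = pq.poll();
            int node = (int) top[0];
            if (dist.containsKey(node)) continue; // already settled via a cheaper path
            dist.put(node, top[1]);
            for (double[] edge : graph.getOrDefault(node, new double[0][])) {
                int nb = (int) edge[0];
                if (!dist.containsKey(nb)) pq.add(new double[]{nb, top[1] + edge[1]});
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        Map<Integer, double[][]> g = new HashMap<>();
        g.put(1, new double[][]{{2, 1}, {3, 4}});
        g.put(2, new double[][]{{3, 2}});
        System.out.println(shortestFrom(g, 1).get(3)); // 3.0 via 1 -> 2 -> 3
    }
}
```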

On Tuesday, 13 May 2014 17:56:07 UTC+2, Antonio Grimaldi wrote:
>
> Hi,
>
> I'm using neo4j 2.0.3 with an embedded DB, in order to build a route system.
>
> I created my graph (with around 1 million nodes and 50 million
> relationships) in this way:
>
>- create nodes:
>
> Label mainNodeLabel = DynamicLabel.label("nodoPrincipale");
> // initialize batchInserter
> BatchInserter inserter = BatchInserters.inserter(neo4jDbPath, config);
> BatchInserterIndexProvider indexProvider = new 
> LuceneBatchInserterIndexProvider(inserter);
> inserter.createDeferredSchemaIndex(mainNodeLabel).on("y").create();
> inserter.createDeferredSchemaIndex(mainNodeLabel).on("x").create();
> BatchInserterIndex osmWayIdPropertyIndex = 
> indexProvider.relationshipIndex("osmWayIdProperties", 
> MapUtil.stringMap("type", "exact"));
> osmWayIdPropertyIndex.setCacheCapacity(OSMAttribute.OSM_WAY_ID_PROPERTY, 
> 10);
>
>  
>
> Map<String, Object> nodeProps = new HashMap<String, Object>();
> double x = ...;
> double y = ...;
> nodeProps.put("y", y);
> nodeProps.put("x", x);
>
> long graphNodeId = inserter.createNode(nodeProps, mainNodeLabel); 
>   
>
>
>
>- create relationships between the created nodes:
>
> Map<String, Object> relationProps = new HashMap<String, Object>();
> relationProps.put(OSMAttribute.EDGE_LENGTH_PROPERTY, lunghezzaArco);
> long relId = inserter.createRelationship(startNode, endNode, 
> "CAR_MAIN_NODES_RELATION", relationProps);
>
> osmWayIdPropertyIndex.add(relId, relationProps)
>
>
>- Neo4j Configuration : 
>
> nodestore_mapped_memory_size=100M
> relationshipstore_mapped_memory_size=3G
> nodestore_propertystore_mapped_memory_size=100M
> strings_mapped_memory_size=200M
> arrays_mapped_memory_size=50M
>
> When I calculate the shortest path between startNode and endNode with Dijkstra: 
> PathFinder<WeightedPath> finder = 
> GraphAlgoFactory.dijkstra(PathExpanders.forTypeAndDirection(relationType, 
> Direction.OUTGOING), "edgeLength");
> WeightedPath path = finder.findSinglePath(startNode, endNode);
>
> I have very slow performance... about 68194 millis for a distance of 42km.
> Is there something wrong? 
> Maybe I should index the relationship cost property too? (edgeLength)
>
> Thanks
> Antonio
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Re: dijkstra bidirectional

2014-05-09 Thread Mattias Persson
That is a good start, but I don't think that's enough as some more
coordination between the two sides will have to occur, otherwise the
traverser may stop before finding the best path out there. We'll just have
to read up on the bidirectional Dijkstra algorithm and translate that into code.

My guess would be that there would have to be some custom code in
BestFirstSelector or a sub class thereof. And as I'm working on a bit of
other things, I don't have the time just yet to help you. But I'll see if I
can find time to at least read up on the algo a bit.
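The coordination in question mostly amounts to a shared best-known meeting cost and a stopping rule: the search may only terminate once the two frontier minima together reach that cost. A minimal standalone sketch under those assumptions (hypothetical illustration, not the neo4j bidirectional traversal framework), on an undirected toy graph given as node -> {neighbor, cost} edges:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Bidirectional Dijkstra sketch: forward and backward searches alternate,
// a best-known meeting cost "mu" is maintained, and the loop stops only
// when the sum of the two frontier minima can no longer beat mu.
public class BiDijkstra {
    public static double shortest(Map<Integer, double[][]> g, int s, int t) {
        Map<Integer, Double> dF = new HashMap<>(), dB = new HashMap<>();
        PriorityQueue<double[]> qF =
                new PriorityQueue<>(Comparator.comparingDouble(e -> e[1]));
        PriorityQueue<double[]> qB =
                new PriorityQueue<>(Comparator.comparingDouble(e -> e[1]));
        qF.add(new double[]{s, 0});
        qB.add(new double[]{t, 0});
        double mu = Double.POSITIVE_INFINITY; // best meeting cost so far
        while (!qF.isEmpty() && !qB.isEmpty()) {
            // Stopping rule: no undiscovered path can beat mu once the two
            // frontier minima together reach it.
            if (qF.peek()[1] + qB.peek()[1] >= mu) break;
            boolean fwd = qF.peek()[1] <= qB.peek()[1]; // expand the cheaper side
            PriorityQueue<double[]> q = fwd ? qF : qB;
            Map<Integer, Double> mine = fwd ? dF : dB, other = fwd ? dB : dF;
            double[] top = q.poll();
            int node = (int) top[0];
            if (mine.containsKey(node)) continue; // already settled on this side
            mine.put(node, top[1]);
            for (double[] e : g.getOrDefault(node, new double[0][])) {
                int nb = (int) e[0];
                double nd = top[1] + e[1];
                if (!mine.containsKey(nb)) q.add(new double[]{nb, nd});
                // The two searches meet at nb: record the combined cost.
                if (other.containsKey(nb)) mu = Math.min(mu, nd + other.get(nb));
            }
        }
        return mu;
    }

    public static void main(String[] args) {
        Map<Integer, double[][]> g = new HashMap<>();
        g.put(1, new double[][]{{2, 1}, {3, 5}});
        g.put(2, new double[][]{{1, 1}, {3, 1}});
        g.put(3, new double[][]{{1, 5}, {2, 1}});
        System.out.println(shortest(g, 1, 3)); // 2.0 via 1 - 2 - 3
    }
}
```

Note the subtlety the thread circles around: the best path is often *not* found the moment the frontiers first touch, which is why the stopping rule compares the frontier minima against mu instead of breaking on the first meeting node.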


On Fri, May 9, 2014 at 11:00 AM, Antonio Grimaldi <
antonio.grimaldim...@gmail.com> wrote:

> Hi Mattias,
> thanks for your answer...
> As I'm a newbie, there would be some examples to follow?
>
> I only tried this modification in the org.neo4j.graphalgo.impl.path.Dijkstra
> class, but I'm not sure that is enough.
> Can you help me?
>
> Thanks
>   private Traverser traverser( Node start, final Node end, boolean
> forMultiplePaths )
> {
>// return (lastTraverser = TRAVERSAL.expand( expander, stateFactory
> )
>  //   .order( new SelectorFactory( forMultiplePaths,
> costEvaluator ) )
>// .evaluator( Evaluators.includeWhereEndNodeIs( end )
> ).traverse( start ) );
>
> return   (lastTraverser = Traversal.bidirectionalTraversal()
>   .mirroredSides( TRAVERSAL.expand( expander )
>   .order( new SelectorFactory( forMultiplePaths,
> costEvaluator) ) )
>   .traverse( start, end ) );
>
> }
>
>
>
> On Thursday, 8 May 2014 at 17:04:32 UTC+2, Antonio Grimaldi wrote:
>
>> Hi,
>> Is there an implementation of Bidirectional Dijkstra?
>>
>> Thanks
>> Antonio
>>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] dijkstra bidirectional

2014-05-09 Thread Mattias Persson
Not in the official neo4j product at least, but with the bi-directional
traversal framework it shouldn't be too hard to write, I suspect.


On Thu, May 8, 2014 at 5:04 PM, Antonio Grimaldi <
antonio.grimaldim...@gmail.com> wrote:

> Hi,
> Is there an implementation of Bidirectional Dijkstra?
>
> Thanks
> Antonio
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson
Neo4j Hacker at Neo Technology



Re: [Neo4j] Re: Shortest Path Algoritm with cost?

2014-03-24 Thread Mattias Persson
There's a limiter in Dijkstra and A* that will stop the iterator after all
paths with the lowest cost (if there are multiple) have been returned. This
could and probably should be changed to allow for returning paths with
increasing cost as long as the user of the iterator pulls items.
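To illustrate that change with a standalone sketch (not the GraphAlgoFactory internals): an "unlimited" Dijkstra-style iterator is just a priority queue of partial paths popped in nondecreasing cost order; every pop that ends at the target is handed to the caller, and the queue only advances as the caller pulls. Restricted to simple paths here so the enumeration is finite.

```java
import java.util.*;

// Enumerate path costs from src to dst in nondecreasing order -- the
// behaviour a "pull as many as you want" Dijkstra iterator would have.
// A priority queue holds partial paths; whenever the cheapest entry ends
// at the target it is emitted. Limited to simple (cycle-free) paths.
class CheapestPaths {
    static List<Double> firstKCosts(Map<Integer, Map<Integer, Double>> g,
                                    int src, int dst, int k) {
        List<Double> out = new ArrayList<>();
        PriorityQueue<AbstractMap.SimpleEntry<Double, List<Integer>>> pq =
            new PriorityQueue<>((a, b) -> Double.compare(a.getKey(), b.getKey()));
        pq.add(new AbstractMap.SimpleEntry<>(0.0, Collections.singletonList(src)));
        while (!pq.isEmpty() && out.size() < k) {
            AbstractMap.SimpleEntry<Double, List<Integer>> entry = pq.poll();
            List<Integer> path = entry.getValue();
            int last = path.get(path.size() - 1);
            if (last == dst) { out.add(entry.getKey()); continue; } // yield one path
            Map<Integer, Double> nbrs = g.getOrDefault(last, Collections.emptyMap());
            for (Map.Entry<Integer, Double> e : nbrs.entrySet()) {
                if (path.contains(e.getKey())) continue; // keep paths simple
                List<Integer> next = new ArrayList<>(path);
                next.add(e.getKey());
                pq.add(new AbstractMap.SimpleEntry<>(entry.getKey() + e.getValue(), next));
            }
        }
        return out;
    }
}
```

The real Dijkstra would of course expand Relationships through the PathExpander instead of map lookups; the point is only that nothing forces the iterator to stop after all cheapest-cost paths have been returned.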


2014-03-23 14:39 GMT+01:00 Antonio Grimaldi 
:

> Hi Michael,
> thanks for your answer.
> I tried to use WeightedPath path = dijkstraPath.findAllPaths(startNode,
> endNode);
> But only one path is ever returned, except when there are two or more paths
> with the same cost.
> Instead I would like to find the first 3 alternative paths, with different
> costs.
>
> Antonio Grimaldi
>
>
> On Friday, 21 March 2014 at 17:26:21 UTC+1, Antonio Grimaldi wrote:
>
>> Hi,
>> I used the Dijkstra algorithm to compute the path between two nodes in
>> the graph, in this way :
>>
>>
>> CostEvaluator<Double> costEvaluator = null;
>> if (costProperty.equalsIgnoreCase(IConstants.EDGE_LENGTH_PROPERTY)) {
>>   // In this case compute the shortest route (least distance travelled)
>>   costEvaluator = CommonEvaluators.doubleCostEvaluator( costProperty );
>> } else if (costProperty.equalsIgnoreCase(IConstants.EDGE_SPEED_PROPERTY)) {
>>   // In this case compute the quickest route (lowest distance/speed ratio)
>>   costEvaluator = new CostEvaluator<Double>() {
>>     @Override
>>     public Double getCost(Relationship relationship, Direction direction) {
>>       Double edgeLength = (Double) relationship.getProperty(IConstants.EDGE_LENGTH_PROPERTY);
>>       Long edgeSpeed = (Long) relationship.getProperty(IConstants.EDGE_SPEED_PROPERTY);
>>       return edgeLength / edgeSpeed;
>>     }
>>   };
>> }
>> PathFinder<WeightedPath> dijkstraPath = GraphAlgoFactory.dijkstra(
>> PathExpanders.forTypeAndDirection(relationType, Direction.OUTGOING),
>> costEvaluator);
>> WeightedPath path = dijkstraPath.findSinglePath(startNode, endNode);
>>
>> So, I can calculate the shortest route or the quickest route by 
>> costProperty's
>> value...
>>
>> Now, I need to do the same with the shortestPath algorithm,
>> because I would like to retrieve the first N = 3 paths found.
>>
>> I tried using :
>> PathFinder<Path> simplePaths = GraphAlgoFactory.shortestPath(
>> PathExpanders.forTypeAndDirection(relationType, Direction.OUTGOING),
>> 100, 3);
>> Iterable<Path> paths = simplePaths.findAllPaths(startNode, endNode);
>>
>> but this does not take my "costProperty" into account.
>>
>> Is there a way to retrieve the first N = 3 paths found, with an algorithm
>> that manages "costProperty", like Dijkstra?
>>
>> Thanks
>>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com



Re: [Neo4j] first time a query is run it produces different results in 2.1.0-M01

2014-03-19 Thread Mattias Persson
7"   | "J77444"   | 219 |
>>
>> +---+
>>
>> 3 rows
>>
>> 6354 ms
>>
>> neo4j-sh (?)$ match (j1:jurt)-[:HAS_TERM]->(t:Term)<-[:HAS_TERM]-(j2:jurt)
>> where NOT (id(j1)=id(j2)) AND j1.jurt_id = 'J72887' with j1,j2,count(t) as
>> commonterms return j1.jurt_id,j2.jurt_id,commonterms order by
>> commonterms desc limit 3;
>>
>> +------------+------------+-------------+
>> | j1.jurt_id | j2.jurt_id | commonterms |
>> +------------+------------+-------------+
>> | "J72887"   | "J70059"   | 227         |
>> | "J72887"   | "J75312"   | 220         |
>> | "J72887"   | "J77444"   | 219         |
>> +------------+------------+-------------+
>>
>> 3 rows
>>
>> 6108 ms
>>
>>
>> What may cause this  ?
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Neo4j" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to neo4j+un...@googlegroups.com.
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com



Re: [Neo4j] Bug in Node.getRelationships function?

2014-03-18 Thread Mattias Persson
This bug has been found and fixed, so the fix will be included in the next
milestone.


2014-03-17 15:13 GMT+01:00 Mattias Persson :

> Thank you, I was just now able to reproduce this!
>
>
> 2014-03-17 11:28 GMT+01:00 Sotiris Beis :
>
> Hi Mattias,
>>
>> here is some more details:
>>
>> - The function to create the graph
>> public void createGraphForMassiveLoad(String dbPath) {
>> System.out.println("Creating Neo4j Graph Database for massive load . . .
>> .");
>>  Map<String, String> config = new HashMap<String, String>();
>> config.put("cache_type", "none");
>>  config.put("use_memory_mapped_buffers", "true");
>> config.put("neostore.nodestore.db.mapped_memory", "200M");
>>  config.put("neostore.relationshipstore.db.mapped_memory", "1000M");
>> config.put("neostore.propertystore.db.mapped_memory", "250M");
>>  config.put("neostore.propertystore.db.strings.mapped_memory", "250M");
>> inserter = BatchInserters.inserter(dbPath, config);
>>  indexProvider = new LuceneBatchInserterIndexProvider(inserter);
>> nodes = indexProvider.nodeIndex("nodes", MapUtil.stringMap("type",
>> "exact"));
>>  }
>>
>> - The code to load the data
>> public void createGraph(String datasetDir) {
>> System.out.println("Loading data in massive mode in Neo4j database . . .
>> .");
>>
>> inserter.createDeferredSchemaIndex(Neo4jGraphDatabase.NODE_LABEL).on("nodeId").create();
>> try {
>> BufferedReader reader = new BufferedReader(new InputStreamReader(new
>> FileInputStream(datasetDir)));
>>  String line;
>> int lineCounter = 1;
>> // Map properties;
>> // IndexHits cache;
>> long srcNode, dstNode;
>> while((line = reader.readLine()) != null) {
>>  if(lineCounter > 4) {
>> String[] parts = line.split("\t");
>>  srcNode = getOrCreate(parts[0]);
>> dstNode = getOrCreate(parts[1]);
>>  inserter.createRelationship(srcNode, dstNode,
>> Neo4jGraphDatabase.RelTypes.SIMILAR, null);
>> }
>> lineCounter++;
>>  }
>> reader.close();
>> }
>> catch (IOException e) {
>>  e.printStackTrace();
>> }
>> nodes.flush();
>> }
>>  private long getOrCreate(String value) {
>> Long id = cache.get(Long.valueOf(value));
>>  if(id == null) {
>> Map<String, Object> properties = MapUtil.map("nodeId", value);
>> id = inserter.createNode(properties, Neo4jGraphDatabase.NODE_LABEL);
>>  cache.put(Long.valueOf(value), id);
>> nodes.add(id, properties);
>> }
>>  return id;
>> }
>>
>> - The dataset I made the tests:
>> http://snap.stanford.edu/data/email-Enron.html
>>
>> Unfortunately the database is not available right now, but it's pretty
>> easy to reproduce with the above code.
>>
>> Thanks,
>> Sotiris
>>
>>
>> On Sunday, 16 March 2014 at 4:48:16 PM UTC+2, Mattias Persson wrote:
>>>
>>> Yup, it's probably a regression introduced in the recent store format
>>> changes. I'd love to track this and be able to reproduce it. How were the
>>> relationships added to the node? Distribution of types/directions and also
>>> in which order they were added. Could you provide detailed information
>>> about that, or provide the database zipped up to me directly (
>>> mat...@neotechnology.com) ?
>>>
>>> Thanks in advance
>>>
>>>
>>> 2014-03-13 12:26 GMT+01:00 Sotiris Beis :
>>>
>>>>  Ok, can you suggest me another temporary solution?
>>>>
>>>>
>>>> On 03/13/2014 01:24 PM, Michael Hunger wrote:
>>>>
>>>> Thanks for the feedback. Could be related to the changes in the store
>>>> format for heavily connected nodes.
>>>>
>>>>  We'll investigate.
>>>>
>>>> Cheers,
>>>>
>>>>  Michael
>>>>
>>>>  
>>>> (michael <http://twitter.com/mesirii>)-[:SUPPORTS]->(*YOU*)-[:USE]->(
>>>> Neo4j <http://neo4j.org>)
>>>> Learn Online <http://neo4j.org/learn/online_course>, 
>>>> Offline<http://www.neo4j.org/events> or
>>>> Read a Book <http://graphdatabases.com> (in Deutsch<http://bit.ly/das-buch>
>>>> )
>>>> We're trading T-shirts for cool Graph Models <http://bit.ly/graphgist>
>>>>
>>>>
>>>>

Re: [Neo4j] Bug in Node.getRelationships function?

2014-03-17 Thread Mattias Persson
Thank you, I was just now able to reproduce this!


2014-03-17 11:28 GMT+01:00 Sotiris Beis :

> Hi Mattias,
>
> here is some more details:
>
> - The function to create the graph
> public void createGraphForMassiveLoad(String dbPath) {
> System.out.println("Creating Neo4j Graph Database for massive load . . .
> .");
> Map<String, String> config = new HashMap<String, String>();
> config.put("cache_type", "none");
> config.put("use_memory_mapped_buffers", "true");
> config.put("neostore.nodestore.db.mapped_memory", "200M");
> config.put("neostore.relationshipstore.db.mapped_memory", "1000M");
> config.put("neostore.propertystore.db.mapped_memory", "250M");
> config.put("neostore.propertystore.db.strings.mapped_memory", "250M");
> inserter = BatchInserters.inserter(dbPath, config);
> indexProvider = new LuceneBatchInserterIndexProvider(inserter);
> nodes = indexProvider.nodeIndex("nodes", MapUtil.stringMap("type",
> "exact"));
> }
>
> - The code to load the data
> public void createGraph(String datasetDir) {
> System.out.println("Loading data in massive mode in Neo4j database . . .
> .");
>
> inserter.createDeferredSchemaIndex(Neo4jGraphDatabase.NODE_LABEL).on("nodeId").create();
> try {
> BufferedReader reader = new BufferedReader(new InputStreamReader(new
> FileInputStream(datasetDir)));
> String line;
> int lineCounter = 1;
> // Map properties;
> // IndexHits cache;
> long srcNode, dstNode;
> while((line = reader.readLine()) != null) {
> if(lineCounter > 4) {
> String[] parts = line.split("\t");
>  srcNode = getOrCreate(parts[0]);
> dstNode = getOrCreate(parts[1]);
>  inserter.createRelationship(srcNode, dstNode,
> Neo4jGraphDatabase.RelTypes.SIMILAR, null);
> }
> lineCounter++;
> }
> reader.close();
> }
> catch (IOException e) {
> e.printStackTrace();
> }
> nodes.flush();
> }
>  private long getOrCreate(String value) {
> Long id = cache.get(Long.valueOf(value));
> if(id == null) {
> Map<String, Object> properties = MapUtil.map("nodeId", value);
> id = inserter.createNode(properties, Neo4jGraphDatabase.NODE_LABEL);
> cache.put(Long.valueOf(value), id);
> nodes.add(id, properties);
> }
> return id;
> }
>
> - The dataset I made the tests:
> http://snap.stanford.edu/data/email-Enron.html
>
> Unfortunately the database is not available right now, but it's pretty
> easy to reproduce with the above code.
>
> Thanks,
> Sotiris
>
>
> On Sunday, 16 March 2014 at 4:48:16 PM UTC+2, Mattias Persson wrote:
>>
>> Yup, it's probably a regression introduced in the recent store format
>> changes. I'd love to track this and be able to reproduce it. How were the
>> relationships added to the node? Distribution of types/directions and also
>> in which order they were added. Could you provide detailed information
>> about that, or provide the database zipped up to me directly (
>> mat...@neotechnology.com) ?
>>
>> Thanks in advance
>>
>>
>> 2014-03-13 12:26 GMT+01:00 Sotiris Beis :
>>
>>>  Ok, can you suggest me another temporary solution?
>>>
>>>
>>> On 03/13/2014 01:24 PM, Michael Hunger wrote:
>>>
>>> Thanks for the feedback. Could be related to the changes in the store
>>> format for heavily connected nodes.
>>>
>>>  We'll investigate.
>>>
>>> Cheers,
>>>
>>>  Michael
>>>
>>>  
>>> (michael <http://twitter.com/mesirii>)-[:SUPPORTS]->(*YOU*)-[:USE]->(
>>> Neo4j <http://neo4j.org>)
>>> Learn Online <http://neo4j.org/learn/online_course>, 
>>> Offline<http://www.neo4j.org/events> or
>>> Read a Book <http://graphdatabases.com> (in Deutsch<http://bit.ly/das-buch>
>>> )
>>> We're trading T-shirts for cool Graph Models <http://bit.ly/graphgist>
>>>
>>>
>>>
>>>  Am 13.03.2014 um 12:22 schrieb Sotiris Beis :
>>>
>>>  Which version of Neo4j are you using?
>>>
>>> I use neo4j-2.1.0-M01
>>>
>>>  You use a Set which eliminates duplicates. You probably have duplicate
>>> neighbourIds, of which only 100 are distinct.
>>>
>>> That was my first thought, but I checked it. There are no duplicates.
>>> Why do you think the result is different when I use this test line
>>>
>>> System.out.println(IteratorUtil.count(n.getRelationships(Direction.
>>> O

Re: [Neo4j] Bug in Node.getRelationships function?

2014-03-16 Thread Mattias Persson
e you are subscribed to a topic in the
> Google Groups "Neo4j" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/neo4j/stHamJpQSBk/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com



Re: [Neo4j] cant connect to server shell from remote

2014-02-14 Thread Mattias Persson
Could there be an issue where the server is registered at a different host
than expected, if that machine has multiple IPs? You can specify the host/IP
to register the shell server at using the remote_shell_host configuration
option for your database.

Does that help?
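For example (as a sketch; the exact file layout varies by version, but these options live in the database config file of that era):

```properties
# conf/neo4j.properties
remote_shell_enabled=true
remote_shell_port=1337
# Register the RMI shell server at the externally reachable address,
# not whatever hostname RMI picks by default on a multi-IP machine.
remote_shell_host=ip.ip.ip.ip
```

This matters with RMI because the registry hands the client a stub containing the registered host: the client can reach the registry on port 1337 and then still time out connecting to the address embedded in the stub, which matches the stack trace quoted below.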


2014-02-14 11:02 GMT+01:00 Jonas M :

> Hello,
>
> I am running Community server 2.1 on an Amazon AMI (Ubuntu instance); the
> Amazon firewall is open on port 1337. I enabled the remote shell on the
> server in neo4j.properties:
>
> # Enable shell server so that remote clients can connect via Neo4j shell.
> remote_shell_enabled=true
> # Specify custom shell port (default is 1337).
> remote_shell_port=1337
>
> netstat shows that port is opened:
>
> $ netstat -anp | grep 1337
> tcp    0    0 0.0.0.0:1337    0.0.0.0:*    LISTEN    13211/java
>
>
> I am trying to connect using remote shell from another amazon instance:
>
> ./neo4j-shell -host ip.ip.ip.ip -port 1337
>
> and always getting :
>
>  Connection refused
> java.rmi.ConnectException: Connection refused to host: ip.ip.ip.ip; nested
> exception is:
> java.net.ConnectException: Connection timed out
> at
> sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619)
> at
> sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:216)
> at
> sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202)
> at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:340)
> at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
> at java.rmi.Naming.lookup(Naming.java:101)
> at
> org.neo4j.shell.impl.RmiLocation.getBoundObject(RmiLocation.java:253)
> at
> org.neo4j.shell.impl.RemoteClient.findRemoteServer(RemoteClient.java:62)
> at org.neo4j.shell.impl.RemoteClient.(RemoteClient.java:55)
> at org.neo4j.shell.impl.RemoteClient.(RemoteClient.java:43)
> at org.neo4j.shell.ShellLobby.newClient(ShellLobby.java:165)
> at org.neo4j.shell.StartClient.startRemote(StartClient.java:297)
> at org.neo4j.shell.StartClient.start(StartClient.java:175)
> at org.neo4j.shell.StartClient.main(StartClient.java:120)
> Caused by: java.net.ConnectException: Connection timed out
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at java.net.Socket.(Socket.java:425)
> at java.net.Socket.(Socket.java:208)
> at
> sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:40)
> at
> sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:146)
> at
> sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:613)
> ... 13 more
>
>
> I can telnet to port 1337 from this host, so the port is open. Where is the
> problem? Why can't I connect using the shell? Please help.
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com



Re: [Neo4j] Neo4j Server takes very long to start with new DB file

2014-01-10 Thread Mattias Persson
r after 3h of it trying to
>> start. It gets to the point of saying "In just a few seconds, Neo4j will be
>> ready...", on the windows version and the same stage on the linux version.
>> It is at this stage i killed the application after 3h.
>>
>> Any ideas? Does it just take a really long time or is something obviously
>> wrong?
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Neo4j" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to neo4j+un...@googlegroups.com.
>>
>> For more options, visit https://groups.google.com/groups/opt_out.
>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com



Re: [Neo4j] Lucene index merge operation optimization?

2014-01-10 Thread Mattias Persson
Hi Tero,

that's probably going to be difficult since, as you said, those lucene
configuration options aren't exposed as neo4j configuration options.

As long as you're trying out different configuration options, would it be
OK to modify source (LuceneDataSource is the place) and rebuild to try out
or would that be difficult for you?


2014/1/2 Tero Paananen 

> We're having an issue where our Neo4j-based application slows to a crawl
> during a Lucene index merge.
>
> This is an issue when we're having unusually high write volume. It
> seems as if everyone is applying for a new job after New Years...
>
> We're going to address this issue on multiple fronts, including with
> improvements on hardware and the application code, but I was wondering
> if I can change the Lucene (index merge) configuration options within
> Neo4j?
>
> I couldn't find anything googling the Internets, and a quick browse at
> the Neo4j source code also seemed to indicate the Lucene configuration
> options aren't exposed via Neo4j.
>
> We're running Neo4j in embedded mode and we're still on the 1.8.x branch.
>
> -TPP
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com



Re: [Neo4j] BatchInserter and TimelineIndex [v1.9.4]

2014-01-08 Thread Mattias Persson
I replied to your comment in the manual:

http://docs.neo4j.org/chunked/snapshot/batchinsert.html#comment-1184069289


2014/1/3 Smit Sanghavi 

> Hi,
>
> I am using BatchInserterIndex to ingest a large amount of data to Neo4j
> DB. I intend to add nodes to a TimelineIndex (Lucene) during the batch.
> Now, in the normal way, TimelineIndex takes (node, long) to add in the
> index. It probably is using the key 'timestamp' internally. (Checked in
> LuceneTimeline.java in github)
>
> My problem is that I'm able to insert nodes into the TL index but not able
> to retrieve them using the regular java API. It always returns
> timelineIndex.getFirst() as null.
> I have initialized the indices as below.
>
> *Regular Way Of Access*
> TimelineIndex timelineIndex = new LuceneTimeline(graphDB,
> indexMgr.forNodes("airing-timeline")); //graphDb initialised properly
> earlier.
> timelineIndex.add(node, 123456L);
>
> *Batch Ingestion*
> BatchInserterIndex timelineIndex =
> indexProvider.nodeIndex("airing-timeline", MapUtil.stringMap("type",
> "exact")); //Initialised just like regular way
>
> Map<String, Object> timelineIndexPropertiesMap = new HashMap<String, Object>();
> timelineIndexPropertiesMap.put("timestamp", 123456L); //Checked the
> code of LuceneTimeline.java and found this internal property for timeline
> timelineIndex.query("*:*").size(); // return 0 (zero)
> timelineIndex.add(airing_node_id, timelineIndexPropertiesMap);
> timelineIndex.query("*:*").size(); // return 1 (one)
>
> 
>
> Now, when I try to use timelineIndex.getFirst() to retrieve data
> added by the Batch Inserter, it always returns null.
> But nodes added in the regular way on the SAME DB return proper values.
>
> Where am I going wrong?
>
> --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com



Re: [Neo4j] Lost in "unable to commit transaction" errors, during move from 1.9.3 -> 2.0.0 (now all reads require transactions)

2014-01-02 Thread Mattias Persson
;> > org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
>>
>> >
>> >  > ... 70 more
>> >  >
>> >  > What are the various causes of this, and how can I troubleshoot
>> > them?
>> >  >
>> >  > This is all code that ran without any problem on 1.9.3 - so I'm
>> >  > thinking I should look into areas of difference there.
>> >  >
>> >  > Sometimes this happens when iterating over the results of
>> > executing a
>> >  > cypher query from java.  Sometimes it happens when I'm using a
>> >  > TraversalDescription I built.
>> >  >
>> >  > Strangely enough, since these are read-only operations, I can
>> > *ignore*
>> >  > the failure exception, and everything seems peachy (the data
>> came
>> > back
>> >  > from the graph database just fine).   I'm just wondering why
>> they're
>> >  > happening.
>> >  >
>> >  > Any suggestions or pointers?
>> >  >
>> >  >
>> >  > --
>> >  > You received this message because you are subscribed to the
>> Google
>> >  > Groups "Neo4j" group.
>> >  > To unsubscribe from this group and stop receiving emails from
>> it,
>> > send
>> >  > an email to neo4j+un...@googlegroups.com .
>> >  > For more options, visit https://groups.google.com/groups/opt_out
>> > <https://groups.google.com/groups/opt_out>.
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups "Neo4j" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> > an email to neo4j+un...@googlegroups.com.
>> > For more options, visit https://groups.google.com/groups/opt_out.
>>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com



Re: [Neo4j] Lost in "unable to commit transaction" errors, during move from 1.9.3 -> 2.0.0 (now all reads require transactions)

2014-01-02 Thread Mattias Persson
Hi David,

unfortunately I think the root cause of the failure to commit is lost in
the thrown exception. You can have a look in messages.log for the cause, or
just attach your messages.log here with a rough timestamp of when this happened.
When we dig up that root cause we can argue if it's strange and unexpected,
or not :)

Btw, whether anything actually gets committed or not is dictated by the
presence of write operations in there, but the code path in the transaction
manager is the same.
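One thing worth double-checking in ported code (an aside, not necessarily the root cause here): in 2.0 the try-with-resources idiom wants tx.success() before the block ends, since close() rolls back any transaction that was not marked successful, and tx.finish() is the deprecated 1.9-era call. The sketch below models just that close-time decision with a stand-in class; the real type is org.neo4j.graphdb.Transaction obtained from GraphDatabaseService.beginTx().

```java
// Toy model of the Neo4j 2.0 transaction close semantics: close() commits
// only if success() was called first, otherwise it rolls back. Transaction
// here is a minimal stand-in so the sketch is self-contained.
class TxIdiom {
    static class Transaction implements AutoCloseable {
        boolean success;     // set by success()
        boolean rolledBack;  // what close() decided
        void success() { success = true; }
        @Override public void close() { rolledBack = !success; }
    }

    // Returns the transaction so the outcome of close() can be inspected.
    static Transaction readSomething(boolean markSuccess) {
        Transaction tx = new Transaction();
        try (Transaction t = tx) {
            // ... read from the graph here ...
            if (markSuccess) {
                t.success(); // the 2.0 idiom, in place of the old tx.finish()
            }
        }
        return tx;
    }
}
```

With the real API the pattern is the same shape: `try (Transaction tx = db.beginTx()) { accessSomeData(); tx.success(); }`.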


2013/12/31 M. David Allen 

> As I'm updating code for 2.0.0, I'm wrapping a lot of old code that only
> serves to inspect a graph (not update it) in transactions, using the new
> idiom:
>
> try ( Transaction tx = myDb.beginTx() ) {
>accessSomeData();
>tx.finish();
> }
>
> After the try block finishes, I'm getting exceptions of this form:
>
> org.neo4j.graphdb.TransactionFailureException: Unable to commit transaction
> at
> org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:134)
> at blah.blah.mycode
>
> Caused by: javax.transaction.RollbackException: Failed to commit,
> transaction rolled back
> at
> org.neo4j.kernel.impl.transaction.TxManager.rollbackCommit(TxManager.java:623)
> at
> org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:402)
> at
> org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:122)
> at
> org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
> ... 70 more
>
> What are the various causes of this, and how can I troubleshoot them?
>
> This is all code that ran without any problem on 1.9.3 - so I'm thinking I
> should look into areas of difference there.
>
> Sometimes this happens when iterating over the results of executing a
> cypher query from java.  Sometimes it happens when I'm using a
> TraversalDescription I built.
>
> Strangely enough, since these are read-only operations, I can *ignore* the
> failure exception, and everything seems peachy (the data came back from the
> graph database just fine).   I'm just wondering why they're happening.
>
> Any suggestions or pointers?
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com



Re: [Neo4j] 100% cpu on one cpu while performaing cypher queries using py2neo

2014-01-02 Thread Mattias Persson
quite a lot of VM GC arguments, where did you get them from?
>>
>> 2013-12-22 16:45:25.640+ INFO  [o.n.k.i.DiagnosticsManager]: VM
>> Arguments: [-XX:+DisableExplicitGC, 
>> -Dorg.neo4j.server.properties=conf/neo4j-server.properties,
>> -Djava.util.logging.config.file=conf/logging.properties,
>> -Dlog4j.configuration=file:conf/log4j.properties,
>> -XX:ParallelGCThreads=48, -XX:+UseParallelOldGC, -XX:+UseNUMA, 
>> -XX:-UseAdaptiveNUMAChunkSizing,
>> -XX:+UseAdaptiveSizePolicy, -XX:+BindGCTaskThreadsToCPUs,
>> -XX:+UseGCTaskAffinity, -XX:-UseLargePages, -XX:-UseCompressedOops,
>> -XX:-ParallelRefProcEnabled, -XX:MaxPermSize=512m, -Xms65536m, -Xmx65536m,
>> -Dneo4j.home=/home/lokesh/code/neo4j-community-2.0.0,
>> -Dneo4j.instance=/home/lokesh/code/neo4j-community-2.0.0,
>> -Dfile.encoding=UTF-8]
>>
>> I would probably go with just CMS for the time being.
>>
>> I continue to investigate but cannot promise too much over the holidays.
>>
>> Michael
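
[Editor's note: Michael's suggestion above — drop the exotic collector flags
and run plain CMS — would go into conf/neo4j-wrapper.conf in a Neo4j 2.0
tarball. A hedged sketch; the property names are assumed from the 2.0-era
wrapper config, so verify them against your install:]

```
# conf/neo4j-wrapper.conf — minimal CMS setup (sketch)
wrapper.java.additional=-XX:+UseConcMarkSweepGC
wrapper.java.additional=-XX:+CMSClassUnloadingEnabled
# Heap sizes in MB; match the -Xms/-Xmx the server used before (65536m)
wrapper.java.initmemory=65536
wrapper.java.maxmemory=65536
```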
>>
>> On 22 Dec 2013 at 21:07, Lokesh Gidra wrote:
>>
>> All the queries are read requests. I am just trying to performance test
>> the server. So I am using only shortestPath queries.
>>
>> Please find attached the files. I have compressed the log dir as one of
>> the file was pretty big.
>>
>>
>> Thanks,
>> Lokesh
>>
>> On Sunday, December 22, 2013 6:48:52 PM UTC+1, Michael Hunger wrote:
>>>
>>> Are these queries only reading or reading and writing?
>>>
>>> Can you produce a thread dump of your Neo4j server when that happens?
>>>
>>> either send a kill -3 <pid>
>>> or use jstack <pid>
>>>
>>> and send us the thread-dump and the content of your logfiles (data/log/*
>>> and data/graph.db/messages.log)
>>>
>>> On 22 Dec 2013 at 14:06, Lokesh Gidra wrote:
>>>
>>> Hello,
>>>
>>> I am running a neo4j-2.0.0 server on a linux machine with 48-cores. I
>>> run a python script on another machine. The script uses multiple threads to
>>> perform multiple shortestPath queries to the server. I am using py2neo
>>> package in the python script. In the beginning the queries are processed
>>> fine. I can see multiple cpus being used by neo4j server in the "top"
>>> output. But suddenly, the server gets into a serial phase where only one CPU
>>> is used, at 100%. During this time, the Python script also makes no progress.
>>>
>>> I am certain that the script is not faulty as sometimes this serial
>>> phase begins AFTER processing all the queries sent by the script, but
>>> BEFORE the script exits.
>>>
>>> Can anyone please suggest what causes this behaviour, and what can be
>>> done to avoid it?
>>>
>>>
>>> Regards,
>>> Lokesh
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com
