[Neo4j] performance of query related to direction of relationship

2014-07-16 Thread Alex winter
Hi all.
I created a Neo4j database. When I imported the data into Neo4j, I always created the
relationships between Item and Values without a direction.
I want to find every Item that has a certain value, with a Cypher query like this:
MATCH (nodeItem:labelItem)--(nodeValue:labelValues {property: "Some Value"})
RETURN nodeItem
This is very fast and returns around 250 rows.

But when I run the query with nodeValue's property set to "No Value", it takes a long
time and does not finish even after thirty minutes. I think that for this query Neo4j
does a full scan over every node with the label labelValues. As I estimate it, around a
thousand nodeItem nodes satisfy the query condition, but in principle that shouldn't
matter.

As I read in the Neo4j documentation, we don't need to create bidirectional
relationships between two nodes, so I think I did something wrong.

But when I use the Neo4j REST interface (for example
http://localhost:7474/db/data/node/1950) and follow the all/in/out relationship links
from a node, I find that the outgoing direction is faster than the incoming direction.

Do you have any recommendations or advice about server configuration or database design
to improve the performance of a query that finds all nodes that have a relationship to
one specific node?

Thanks




Re: [Neo4j] performance of query related to direction of relationship

2014-07-16 Thread Michael Hunger
What does "No Value" mean? Leaving off the property restriction altogether?
What do you want to achieve with that global query then?

Then Neo4j has to scan over all nodes with that label. If you have labels on both sides,
you can force Neo4j to start the scan at the side with the fewer entries by adding a
USING SCAN nodeItem:labelItem hint, see:
http://docs.neo4j.org/chunked/milestone/query-using.html#using-hinting-a-label-scan

Otherwise, as soon as you have an index or constraint on a :Label(property) pair,
Cypher will use that index.
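
A minimal sketch of both options, using the placeholder names from the original query
(labelItem, labelValues and the property literally called property are just the names
from that post):

// Option 1: index the property so the lookup on :labelValues doesn't need a full label scan
CREATE INDEX ON :labelValues(property);

// Option 2: hint the planner to start the scan from the (smaller) :labelItem side
MATCH (nodeItem:labelItem)--(nodeValue:labelValues)
USING SCAN nodeItem:labelItem
WHERE nodeValue.property = "No Value"
RETURN nodeItem;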

The performance of relationship directions only depends on the number of rels that are
returned (and on how randomly they were created in the first place, i.e. whether they
have to be loaded from disk).
What version do you use?

Michael



[Neo4j] Re: Possible memory leak when using MERGE queries (2.1.2)

2014-07-16 Thread Matt Aldridge
Following up on this: my initial hypothesis that the problem was specific to MERGE
queries appears incorrect, or at least overly specific. In another application, I
needed to add relationships among millions of nodes in a pre-existing graph,
essentially using Cypher queries following the pattern of
MATCH (a), (b) CREATE (a)-[r]->(b). Running with a 10GB heap (unnecessarily large,
I thought, but just in case) I can make it through about 16M queries before GC churn
takes over and eventually Jetty times out.
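
For context, a hedged sketch of what one such relationship-creating statement usually
looks like as a parameterized query (the labels, id property and relationship type below
are hypothetical, not taken from the gist; note that CREATE always needs an explicit
direction, even if you later MATCH the relationship without one):

MATCH (a:Source {id: {aId}}), (b:Target {id: {bId}})
CREATE (a)-[r:RELATED_TO]->(b)
RETURN id(r);

With this shape, the lookups of a and b should themselves be backed by indexes on
:Source(id) and :Target(id), otherwise every statement pays for a label scan before it
even creates the relationship.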

To eliminate the possibility of any bugs in py2neo's transaction/HTTP 
connection handling, I altered my code to remove the py2neo dependency and 
communicate directly with Neo4j's begin/commit endpoint 
(http://localhost:7474/db/data/transaction/commit). ... Same result.

I updated my test code gist (
https://gist.github.com/mlaldrid/85a03fc022170561b807) similarly to 
eliminate py2neo and use the begin/commit endpoint directly. Running on the 
Neo4j community edition with a 512MB heap I get GC churn at about 350K 
queries. I've downloaded a trial copy of the enterprise edition and run the 
same test code with the same heap size (512MB) for an hour/3.5M queries 
with no signs of memory leak or GC churn.

Why is this memory leak behavior different in the community and enterprise 
editions? Is it something that the enterprise edition's advanced caching 
feature solves? Is it a known but opaque limitation of the community 
edition?

Thanks,
-Matt

On Monday, July 7, 2014 8:47:43 AM UTC-4, Matt Aldridge wrote:

 FWIW I have replicated this issue on 2.0.3 as well. While GC churn does 
 kick in at approximately the same point as with 2.1.2, it is interesting to 
 note how much faster the test case cypher queries perform in 
 2.1.2--something like 50% faster! :)

 Nonetheless, the memory leak does continue to be an issue for me. AFAICT, 
 the py2neo API is properly opening, submitting, and closing the cypher 
 transactions according to spec. I'd greatly appreciate any assistance in 
 determining whether this is indeed a bug in Neo4j.

 Thanks,
 -Matt


 On Wednesday, July 2, 2014 2:20:10 PM UTC-4, Matt Aldridge wrote:

 Hi everyone,

 I have a use case that appears to expose a memory leak in Neo4j. I've 
 been testing this with Neo4j 2.1.2 on OSX.

 I've created a test case that reproduces the issue consistently and 
 mimics the behavior of my real-world application. 
 https://gist.github.com/mlaldrid/85a03fc022170561b807 This uses py2neo 
 to interface with Neo4j's Cypher transactional HTTP endpoint. To force the 
 suspected memory leak behavior to surface more quickly, I limit Neo4j's max 
 heap to 1GB. In practice I tend to use an 8GB heap, but the misbehavior 
 still occurs (albeit delayed).

 In my real-world application, we need to CREATE millions of primary nodes 
 of interest and MERGE ancillary nodes into the graph, as they can be shared 
 by any number of other primary nodes. In the test case here we give the 
 primary nodes the Person label, and the ancillary nodes are labeled Address 
 and Phone. A fixed set of Address and Phone nodes are generated and 
 randomly attached to Person nodes.

 Each Cypher transaction CREATEs 1000 Person nodes and MERGEs in 2 Address 
 and 1 Phone node for each Person. The transactions are created and then 
 committed without any intermediate executions of the open transaction.
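 
 Roughly the shape of each per-person statement, sketched with hypothetical property and
 relationship names (the real ones are in the gist and not reproduced here):
 
 CREATE (p:Person {name: {name}})
 MERGE (a1:Address {street: {street1}})
 MERGE (a2:Address {street: {street2}})
 MERGE (ph:Phone {number: {number}})
 CREATE (p)-[:HAS_ADDRESS]->(a1),
        (p)-[:HAS_ADDRESS]->(a2),
        (p)-[:HAS_PHONE]->(ph)
 
 With 1000 of these per transaction, the label indexes created up front (the "Creating
 label indices" step in the log below) are what keep the MERGE lookups from scanning all
 :Address and :Phone nodes.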

 This log demonstrates increasingly poor load performance until finally 
 Neo4j runs out of heap space and fails a transaction:

 % time python neo4j_heap_stress.py
 2014-07-01 17:30:07,596 :: __main__ :: Generating fake data ...
 2014-07-01 17:30:31,430 :: __main__ :: Creating label indices ...
 2014-07-01 17:30:31,992 :: __main__ :: Beginning load ...
 2014-07-01 17:33:33,949 :: __main__ :: Finished batch 100
 2014-07-01 17:35:49,346 :: __main__ :: Finished batch 200
 2014-07-01 17:37:56,856 :: __main__ :: Finished batch 300
 2014-07-01 17:40:01,333 :: __main__ :: Finished batch 400
 2014-07-01 17:42:04,855 :: __main__ :: Finished batch 500
 2014-07-01 17:44:11,104 :: __main__ :: Finished batch 600
 2014-07-01 17:46:17,261 :: __main__ :: Finished batch 700
 2014-07-01 17:48:21,778 :: __main__ :: Finished batch 800
 2014-07-01 17:50:28,206 :: __main__ :: Finished batch 900
 2014-07-01 17:52:39,424 :: __main__ :: Finished batch 1000
 2014-07-01 17:54:56,618 :: __main__ :: Finished batch 1100
 2014-07-01 17:57:22,797 :: __main__ :: Finished batch 1200
 2014-07-01 18:02:27,327 :: __main__ :: Finished batch 1300
 2014-07-01 18:12:35,143 :: __main__ :: Finished batch 1400
 2014-07-01 18:24:16,126 :: __main__ :: Finished batch 1500
 2014-07-01 18:38:25,835 :: __main__ :: Finished batch 1600
 2014-07-01 18:56:18,826 :: __main__ :: Finished batch 1700
 2014-07-01 19:22:00,779 :: __main__ :: Finished batch 1800
 2014-07-01 20:17:18,317 :: __main__ :: Finished batch 1900
 Traceback (most recent call last):
   File "neo4j_heap_stress.py", line 112, in <module>
     main()
   File "neo4j_heap_stress.py", line 

[Neo4j] Re: Neo4j : restart tomcat issue :org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/dbpath/schema/label/lucene/write.lock

2014-07-16 Thread Navrattan Yadav
Hi Michael,

Thanks for the reply. We have implemented the lifecycle listener, but what about the old
DB? I checked messages.log and it clearly says that the DB was not closed in a clean state.
I want to migrate this DB to another server and have no other backup. Is there any way to
recover the old data from a DB that is in a non-clean state?

PS: I deleted write.lock and the other files, but they get created again. Would
neo4j-shell.bat work for releasing the lock on a DB in a non-clean state?

On Saturday, April 26, 2014 5:01:23 PM UTC+5:30, Navrattan Yadav wrote:

 Hi, I am using Neo4j 2.0.0-M06.

 When I restart the server (Tomcat), I get this issue:
 org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
 NativeFSLock@/var/dbpath/schema/label/lucene/write.lock


 java.lang.RuntimeException: Error starting 
 org.neo4j.kernel.EmbeddedGraphDatabase, /var/database
 at 
 org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:333)
 at 
 org.neo4j.kernel.EmbeddedGraphDatabase.init(EmbeddedGraphDatabase.java:100)
 at 
 org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:92)
 at 
 org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:197)
 at 
 org.neo4j.graphdb.factory.GraphDatabaseFactory.newEmbeddedDatabase(GraphDatabaseFactory.java:69)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
 at 
 com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:895)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:843)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:804)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at 
 org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at 
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:987)
 at 
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:579)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
 at 
 

Re: [Neo4j] Possible memory leak when using MERGE queries (2.1.2)

2014-07-16 Thread Chris Vest
Yeah, it's a known bug: https://github.com/neo4j/neo4j/pull/2690

--
Chris Vest
System Engineer, Neo Technology
[ skype: mr.chrisvest, twitter: chvest ]



Re: [Neo4j] How to get Clean DB State for backup

2014-07-16 Thread Chris Vest
Our enterprise edition comes with an online backup tool that does both full and
incremental backups:
http://docs.neo4j.org/chunked/stable/re04.html

For the community edition, you can do the copying-and-zipping dance, but you have to shut
the database down first. Otherwise the changes to the store files can get out of sync with
what the transaction logs think is committed, which means the backup you get is a database
that cannot be recovered.

--
Chris Vest
System Engineer, Neo Technology
[ skype: mr.chrisvest, twitter: chvest ]


On 16 Jul 2014, at 19:59, Sourabh Kapoor kapoor@gmail.com wrote:

 To Neo4j's greatest minds:
 
 Is there any tried and tested way of doing regular backups of Neo4j? I tried manually
 copying and zipping, but that results in a non-clean DB state.
 
 Can anyone share the command or steps to get a clean DB state that can be used for a
 backup?
 
 Can an existing non-clean DB state be cleaned somehow? I tried running neo4j-shell but
 with no success.
 


Re: [Neo4j] How to get Clean DB State for backup

2014-07-16 Thread Sourabh Kapoor
Thanks Chris. I am only using neo4j.jar with Spring integration in embedded mode. Can I
achieve the same with the enterprise edition?

Thanks for your insight, it's a great help.


Re: [Neo4j] messages.log files

2014-07-16 Thread Michael Hunger
Thanks a lot, Adam, for sharing your results!

Your database folder name couchdb-neo4j-deft looks really interesting; I'd love to learn
more about that project of yours :) Feel free to ping me privately.
Cheers,

Michael

On 16.07.2014 at 18:44, Adam Lofts adam.lo...@gmail.com wrote:

 I think I figured out what was going on. One of the code paths for the graph
 query was missing a finish() call. I think this was causing a lot of
 transactions to 'work' but then eventually time out and dump to the
 messages.log file.
 
 Still verifying that this is the issue, but right now disk growth looks a lot
 better.
 
 Adam  
 
 On Tuesday, 15 July 2014 10:01:59 UTC-7, Adam Lofts wrote:
 Hi Michael,
 
 Truncating it regularly would work and I could write a cron job to do it.
 Right now I'm looking for a more robust solution, since in some sense these
 folders are managed by Neo4j and it seems dangerous to interfere with them
 from another process. An integrated solution would still work if the daemon
 gets re-deployed into a different folder, for example.
 
 Example messages.log file (there are lots of these on the server). Note that
 it is 1.5 GB in size:
 
 -rw-r--r-- 1 root root 1.5G Jul 15 16:33 
 /var/www/carboncloud/couchdb-neo4j-deft/couchdb-neo4j-deft-0.9.1/indexes/89d4d8e6-e072-44a6-9852-6598013560e3/messages.log
 
 Example content from the start of the file:
 
 2014-03-19 15:28:07,477 INFO  [neo4j.txmanager]: TM new log: tm_tx_log.1
 2014-03-19 15:28:07,483 INFO  [neo4j.xafactory]: Opened logical log 
 [/var/www/carboncloud/couchdb-neo4j-deft/couchdb-neo4j-deft-0.9.1/indexes/89d4d8e6-e072-44a6-9852-6598013560e3/index/lucene.log.1]
  version=0, lastTxId=1 (clean)
 2014-03-19 15:28:07,488 DEBUG [neo4j.diagnostics]: --- STARTUP diagnostics 
 START ---
 2014-03-19 15:28:07,488 DEBUG [neo4j.diagnostics]: Neo4j Kernel properties:
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: forced_kernel_id=
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: read_only=false
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: 
 neo4j.ext.udc.host=udc.neo4j.org
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: 
 logical_log=nioneo_logical.log
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: node_auto_indexing=true
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: 
 intercept_committing_transactions=false
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: cache_type=soft
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: 
 intercept_deserialized_transactions=false
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: 
 lucene_searcher_cache_size=2147483647
 2014-03-19 15:28:07,489 DEBUG [neo4j.diagnostics]: 
 neo4j.ext.udc.interval=8640
 
 The problem may be that I open and close the index a lot. Maybe I can reduce 
 the debug output when doing this?
 
 Thanks!
 
 On Monday, 14 July 2014 17:11:58 UTC-7, Michael Hunger wrote:
 Does it help to truncate it regularly?
 
 What is the actual memory issue? Perhaps you can share your messages.log file?
 
 Michael
 
 On 14.07.2014 at 00:11, Adam Lofts adam@gmail.com wrote:
 
 Hi,
 
 I am running some servers which open lots of Neo4j indexes. These servers
 are 'on the limit' of memory capacity, so there is a lot of logging output to
 messages.log. Is there some way I can limit the size of the messages.log
 file, or alternatively turn off all logging to this file? The disk usage just
 from all the messages.log files is a problem.
 
 Thanks!
 


Re: [Neo4j] How to get Clean DB State for backup

2014-07-16 Thread Chris Vest
If you configure your embedded enterprise edition with enable_online_backup=true, then
it should work.

--
Chris Vest
System Engineer, Neo Technology
[ skype: mr.chrisvest, twitter: chvest ]




Re: [Neo4j] How to get Clean DB State for backup

2014-07-16 Thread Chris Vest
Of course you'll also need the neo4j-backup dependency on your classpath, and 
transitively whatever that needs.

--
Chris Vest
System Engineer, Neo Technology
[ skype: mr.chrisvest, twitter: chvest ]




[Neo4j] Any plans for an optional schema?

2014-07-16 Thread Jason Gillman Jr.
I was just wondering if the ability to utilize a schema of sorts was on the 
road map.

When I say schema, I'm thinking more along the lines of relational 
constraints.

Let's use the following simple example.

We have the following types of entities represented by node labels
(:`Server`)
(:`Switch`)
(:`Physical Interface`)

Then we would want to enforce the following relations (I would think these 
restrictions would seem intuitive):

(:`Server`)-[:`Contains`]-(:`Physical Interface`)
(:`Switch`)-[:`Contains`]-(:`Physical Interface`)
(:`Physical Interface`)-[:`Connects`]-(:`Physical Interface`)


Basically, to ensure data consistency without having to build it into an 
application, we would want it so that Neo4j would not allow, for example, a 
Server to connect to another Server, or a Switch, nor would we want to make 
a Physical Interface contain a Server.

Is something like this in the plans? Of course the use of these constraints 
would be completely optional.

Thanks!

-Jason



Re: [Neo4j] Any plans for an optional schema?

2014-07-16 Thread Tom Zeppenfeldt
Sounds like structr.org may be something you want to look at.




Met vriendelijke groet / With kind regards



Ir. T. Zeppenfeldt
van der Waalsstraat 30
6706 JR  Wageningen
The Netherlands

Mobile: +31 6 23 28 78 06
Phone: +31 3 17 84 22 17
E-mail: t.zeppenfe...@ophileon.com
Web: www.ophileon.com
Twitter: tomzeppenfeldt
Skype: tomzeppenfeldt




[Neo4j] LOAD CSV Neo.TransientError.Statement.ExternalResourceFailure error

2014-07-16 Thread ramiqsha


Hi there,

I've recently downloaded Neo4j 2.1.2 (community edition) for Windows 64-bit. I'm trying
to import a .csv file as described in the tutorial:

http://neo4j.com/docs/2.1.2/cypherdoc-importing-csv-files-with-cypher/

I copied the following into the Neo4j browser console:

LOAD CSV WITH HEADERS FROM "http://docs.neo4j.org/chunked/2.1.2/csv/import/persons.csv" AS csvLine
CREATE (p:Person { id: toInt(csvLine.id), name: csvLine.name })

The following error occurs:
Neo.TransientError.Statement.ExternalResourceFailure

The exact same command works on http://console.neo4j.org/. Where is the problem?


Thanks for your time, I appreciate it very much!

 

Tobias



Re: [Neo4j] Any plans for an optional schema?

2014-07-16 Thread Michael Hunger
Right, I agree with Tom; currently you get this in structr (even when importing Neo4j
databases, e.g. from a GraphGist).

It definitely makes sense to have a feature like that.

For Neo4j this is on the roadmap too, but not in the immediate future; it's more a
capacity issue :)
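
Until that exists, one hedged workaround is to detect violations instead of preventing
them: run a periodic check query and treat any returned rows as constraint violations.
A sketch using the labels and relationship types from Jason's example:

// Contains must connect a Server or a Switch with a Physical Interface
MATCH (x)-[r:Contains]-(y)
WHERE NOT ((x:Server OR x:Switch) AND y:`Physical Interface`)
  AND NOT ((y:Server OR y:Switch) AND x:`Physical Interface`)
RETURN x, r, y LIMIT 25;

// Connects is only allowed between two Physical Interfaces
MATCH (a)-[r:Connects]-(b)
WHERE NOT (a:`Physical Interface` AND b:`Physical Interface`)
RETURN a, r, b LIMIT 25;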



Re: [Neo4j] LOAD CSV Neo.TransientError.Statement.ExternalResourceFailure error

2014-07-16 Thread Michael Hunger
Can you access that CSV URL from your computer in general?
Could you also check your logs (messages.log in your database directory, and the server
logs somewhere under Application Data, I think)?

It worked fine for me (on Mac though):

LOAD CSV WITH HEADERS FROM "http://docs.neo4j.org/chunked/2.1.2/csv/import/persons.csv" AS csvLine
CREATE (p:Person { id: toInt(csvLine.id), name: csvLine.name })

Added 5 labels, created 5 nodes, set 10 properties, returned 0 rows in 1024 ms
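
As a side note for larger files (tens of thousands of rows or more), Neo4j 2.1 can also
commit in batches while loading; a minimal sketch against the same sample file (depending
on the client, this may have to run outside of an already open transaction, e.g. from
neo4j-shell):

USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "http://docs.neo4j.org/chunked/2.1.2/csv/import/persons.csv" AS csvLine
CREATE (p:Person { id: toInt(csvLine.id), name: csvLine.name })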



[Neo4j] neo4j-shell with SSH tunneling

2014-07-16 Thread Chaofeng
Hi all,

I set up the Neo4j DB on a remote machine. Now I want to talk to it via neo4j-shell. I
tried SSH tunneling to forward local port 10016 (for example) to remote port 1337.
However, I cannot connect to the remote Neo4j DB via $ bin/neo4j-shell -port 10016 on
the local machine. It says:
ERROR (-v for expanded information):
error during JRMP connection establishment; nested exception is:
java.io.EOFException

Does anyone have a similar problem and has solved it? Or is it simply not feasible this
way? If so, do you have any other suggestions for remote shell access without adding an
IP address?


Chaofeng



Re: [Neo4j] LOAD CSV Neo.TransientError.Statement.ExternalResourceFailure error

2014-07-16 Thread ramiqsha
Hi Michael,

I can access the CSV URL from my computer, there's no problem (and, as a reminder, the
command works on the online console http://console.neo4j.org/). Sorry, but I can't find
the log files you asked me to look for... there's one Neo4j folder on my computer,
called default.graphdb, and inside it I can't find any log files.

The main reason I want to switch away from the online console is that I hope to be able
to import a larger dataset (50,000 nodes). Is that possible?

Can you provide further assistance? I'll appreciate it very much!

Thanks again,
Tobias



Re: [Neo4j] Any plans for an optional schema?

2014-07-16 Thread Alan Robertson

Hi Jason,

From your examples, you should look at the Assimilation Project!

http://assimilationsystems.com/
http://assimproj.org/

This is exactly the kind of data modeling we're doing - together with 
automating the collection of the data and keeping it up to date.  It's a 
very interesting project (IMHO).






--
Alan Robertson al...@unix.sh - @OSSAlanR

Openness is the foundation and preservative of friendship...  Let me claim from you 
at all times your undisguised opinions. - William Wilberforce
