Re: [Neo4j] CPU spikes really high as the data size increases to 50 MB

2015-05-10 Thread Michael Hunger
> On 11.05.2015 at 03:29, Arun Kumar wrote: > Thanks Michael.. We will immediately start working on upgrading the Neo4j version.. > At any given time we would have around 20 movies on average in the system.. Not much.. > "This is another cross-product query. What is "a", a movie

Re: [Neo4j] CPU spikes really high as the data size increases to 50 MB

2015-05-10 Thread Arun Kumar
Thanks Michael.. We will immediately start working on upgrading the Neo4j version.. At any given time we would have around 20 movies on average in the system.. Not much.. "This is another cross-product query. What is "a", a movie or an actor? You should make sure that the lookup of a is done vi
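A minimal sketch of the index-backed lookup being suggested here, assuming a hypothetical Movie label with a title property and a WATCHED relationship (the actual labels, properties and queries are not shown in the preview); on Neo4j 2.2 the anchored MATCH then starts from an index seek instead of a label scan or cross product:

    import java.util.Collections;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Result;
    import org.neo4j.graphdb.Transaction;

    public class IndexedLookup {

        // One-off schema setup so the lookup of "a" can use an index seek.
        public static void createIndex(GraphDatabaseService db) {
            try (Transaction tx = db.beginTx()) {
                db.execute("CREATE INDEX ON :Movie(title)");
                tx.success();
            }
        }

        // Anchors the pattern on one indexed Movie node before expanding,
        // instead of matching two unconnected patterns (a cross product).
        public static String similarMovies(GraphDatabaseService db, String title) {
            try (Transaction tx = db.beginTx()) {
                Result result = db.execute(
                        "MATCH (a:Movie {title: {title}})<-[:WATCHED]-(u)-[:WATCHED]->(rec:Movie) "
                      + "WHERE rec <> a "
                      + "RETURN rec.title AS title, count(*) AS freq "
                      + "ORDER BY freq DESC LIMIT 10",
                        Collections.<String, Object>singletonMap("title", title));
                String rows = result.resultAsString(); // consume inside the transaction
                tx.success();
                return rows;
            }
        }
    }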

Re: [Neo4j] CPU spikes really high as the data size increases to 50 MB

2015-05-10 Thread Michael Hunger
You should really update to a newer version of Neo4j. With 2.2 you also get visual query plan profiling, which should help you a lot. Most of your queries create way too much intermediate data. Perhaps also get some hands-on consulting / help with writing your queries. Michael Some tips inline
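As an illustration of the query plan profiling mentioned above, a small sketch for embedded Neo4j 2.2 (the Cypher is a stand-in, not one of the thread's actual queries); in the browser the same thing is just a matter of prefixing the query with PROFILE:

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Result;
    import org.neo4j.graphdb.Transaction;

    public class ProfileExample {
        public static void printPlan(GraphDatabaseService db) {
            try (Transaction tx = db.beginTx()) {
                Result result = db.execute("PROFILE MATCH (m:Movie) RETURN count(m)");
                result.resultAsString(); // exhaust the result so profiler stats are filled in
                // Prints each operator with rows and db hits; very large numbers early in
                // the plan usually point at a label scan or an unintended cross product.
                System.out.println(result.getExecutionPlanDescription());
                tx.success();
            }
        }
    }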

Re: [Neo4j] Schema#awaitIndexOnline forever?

2015-05-10 Thread Michael Hunger
Florent, could you raise that as a GitHub issue? How much data is in your test database? What happens if you run the await in a separate tx? Michael > On 10.05.2015 at 14:49, Florent Biville wrote: > Hi, > I'm trying to run the following snippet (with Neo4j v2.2.1 / impermanent
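For reference, a minimal sketch of the "await in a separate tx" variant being suggested, reusing the Labels.ARTIST / name index from Florent's snippet (Labels is the poster's own enum implementing org.neo4j.graphdb.Label):

    import java.util.concurrent.TimeUnit;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Transaction;
    import org.neo4j.graphdb.schema.IndexDefinition;

    public class AwaitInSeparateTx {
        public static void createAndAwait(GraphDatabaseService graphDB) {
            IndexDefinition definition;
            try (Transaction tx = graphDB.beginTx()) {
                definition = graphDB.schema()
                                    .indexFor(Labels.ARTIST)
                                    .on("name")
                                    .create();
                tx.success(); // commit so index population can actually start
            }
            try (Transaction tx = graphDB.beginTx()) {
                graphDB.schema().awaitIndexOnline(definition, 30, TimeUnit.SECONDS);
                tx.success();
            }
        }
    }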

Re: [Neo4j] Memory Leak : only reading and traversing nodes

2015-05-10 Thread Michael Hunger
Perhaps you can share your full code? If you are keeping the paths around, how many elements (paths) are in that list? Michael > On 10.05.2015 at 03:04, Justin Wong wrote: > Hi, > I'm using the community edition. > I only have 2 nodes, bus and stop. > Relationship properties have arri
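A sketch of the consumption pattern being hinted at, under the assumption that the code collects Path objects from a PathFinder: summarise each path inside the transaction and keep only the small values you need, rather than holding a long-lived list of Path objects (the expander parameter is a placeholder for the custom one described in the original post):

    import java.util.ArrayList;
    import java.util.List;
    import org.neo4j.graphalgo.GraphAlgoFactory;
    import org.neo4j.graphalgo.PathFinder;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.Path;
    import org.neo4j.graphdb.PathExpander;
    import org.neo4j.graphdb.Transaction;

    public class PathSummaries {
        public static List<Integer> pathLengths(GraphDatabaseService db, PathExpander<?> expander,
                                                Node start, Node end) {
            List<Integer> lengths = new ArrayList<>();
            try (Transaction tx = db.beginTx()) {
                PathFinder<Path> finder = GraphAlgoFactory.shortestPath(expander, 20);
                for (Path p : finder.findAllPaths(start, end)) {
                    lengths.add(p.length()); // keep a number, not the Path and the nodes it pins
                }
                tx.success();
            }
            return lengths;
        }
    }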

[Neo4j] Memory Leak : only reading and traversing nodes

2015-05-10 Thread Justin Wong
Hi, I'm using the community edition. I only have 2 nodes, bus and stop. Relationship properties have arrival and departure time. I have two inputs: the start and end point of a passenger at a particular time. I loop through each input to determine their path; a custom expander is used to filter and find
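Since the custom expander itself is cut off in the preview, here is only a hypothetical skeleton of what such a time-filtering expander could look like; the relationship direction and the "departure"/"arrival" property names are guesses based on the description above:

    import java.util.ArrayList;
    import java.util.List;
    import org.neo4j.graphdb.Direction;
    import org.neo4j.graphdb.Path;
    import org.neo4j.graphdb.PathExpander;
    import org.neo4j.graphdb.Relationship;
    import org.neo4j.graphdb.traversal.BranchState;

    public class TimetableExpander implements PathExpander<Object> {

        @Override
        public Iterable<Relationship> expand(Path path, BranchState<Object> state) {
            Relationship last = path.lastRelationship();
            long earliestDeparture = last == null ? 0L : (Long) last.getProperty("arrival");
            List<Relationship> next = new ArrayList<>();
            for (Relationship r : path.endNode().getRelationships(Direction.OUTGOING)) {
                // Only follow connections that depart after the previous leg arrives.
                if ((Long) r.getProperty("departure") >= earliestDeparture) {
                    next.add(r);
                }
            }
            return next;
        }

        @Override
        public PathExpander<Object> reverse() {
            return this; // sufficient for a forward-only search in this sketch
        }
    }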

[Neo4j] Re: When not to use Neo4j

2015-05-10 Thread Mike Holdsworth
I am less and less convinced that we should ever rely on a single database as "the best candidate" for persistence - this is the Oracle or SQLServer discussion, based on the misguided notion that simplification comes from consolidating data into a single vendor product. What differentiates dat

Re: [Neo4j] CPU spikes really high as the data size increases to 50 MB

2015-05-10 Thread Arun Kumar
Michael, Thanks for looking into this.. We use Neo4j as a recommendation engine... We have movies and classified services listed on our site.. We recommend movies or classifieds to our customers based on their browsing behavior... Below are some of the CQLs we use.. 1. Movie recommendation C

Re: [Neo4j] array size exceeds maximum allowed size

2015-05-10 Thread Chris Vest
I think this might be caused by a miscalculation in the High Performance Cache settings heuristics. Does the problem go away if you change the cache_type setting away from “hpc” (which is the default in our enterprise edition), or use the 2.3-M1 milestone? By the way, the “dbms.pagecache.memory
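For anyone who wants to try this, a hedged sketch of the relevant lines in conf/neo4j.properties on a 2.2.x enterprise install; the values are placeholders, not tuned recommendations:

    # Switch the object cache away from the High Performance Cache to test
    # whether the hpc sizing heuristics are the cause.
    cache_type=soft
    # Size the page cache explicitly instead of relying on the default heuristic.
    dbms.pagecache.memory=4g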

[Neo4j] Re: org.neo4j.graphdb.NotFoundException: RELATIONSHIP[7141600] has no property with propertyKey="__type__".

2015-05-10 Thread BtySgtMajor
Also came across this and am wondering if there's been any word on it. On Wednesday, December 3, 2014 at 7:16:58 AM UTC-5, Mamta Thakur wrote: > Hi, > We are using neo4j 2.0.3 and SDN (3.1.0). > We are getting this error when trying to execute this cypher with the repository. > @Query("MATCH (n:

Re: [Neo4j] CPU spikes really high as the data size increases to 50 MB

2015-05-10 Thread Michael Hunger
What are you doing? Can you share the type of workload / queries / code that you run? Which version are you using? According to your messages.log it spends all its time trying to free memory (causing the spike). > wrapper.java.maxmemory=800 -> you forgot to add a suffix here, so you do 800 bytes o

[Neo4j] Schema#awaitIndexOnline forever?

2015-05-10 Thread Florent Biville
Hi, I'm trying to run the following snippet (with Neo4j v2.2.1 / impermanent graph database): try (Transaction tx = graphDB.beginTx()) { IndexDefinition definition = graphDB.schema() .indexFor(Labels.ARTIST) .on("name") .create(); graphDB.schema().awaitIndexOnli

[Neo4j] Re: When not to use Neo4j

2015-05-10 Thread Florent Biville
I would say whenever the data set cannot fit on a single machine. This is not a definitive no-go, as you can always shard the data yourself and spread it across several (preferably co-located) Neo4j instances, but this is far from ideal in most cases, I guess. On Saturday, 9 May 2015 11:07:15 UTC+2,