Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread Michael Hunger
Can you share your code and the full exception, as well as the 
graph.db/messages.log file?

Thx

Sent from mobile device

On 13.07.2014 at 19:34, José Cornado jose.corn...@gmail.com wrote:

 Hello!
 
 I am running into the following:
 
 After inserting around 30,000 nodes to a graph, I close the operation with 
 transaction success, but Neo4j returns:
 
 Unable to commit transaction.
 
 Is there a hard limit on the size of a transaction? Doing it on a per-node 
 basis is too slow.
 
 Thanks a lot!!!
 



Re: [Neo4j] Error starting org.neo4j.kernel.EmbeddedGraphDatabase

2014-07-14 Thread roald . targe
Hi

In fact, I had a version conflict between Neo4j jars.

Thanks and regards,

Roald



[Neo4j] Re: Modeling hierarchies for relationships

2014-07-14 Thread Benny Kneissl
Hi Lundin,


I have now found a real-world example; maybe you can comment on it. In this 
gene-disease association ontology you have to connect nodes representing 
genes with nodes representing diseases by their association type. Let's 
suppose gene A has a GeneticVariationAssociation to disease B; then I have 
to add 4 relationships (GeneticVariationAssociation, BiomarkerAssociation, 
GeneDiseaseAssociation, Association) between A and B. Is it recommended to 
do it this way, or are there smarter possibilities?

https://lh6.googleusercontent.com/-bPKb2sdcb4E/U8OXnr1U8eI/ACQ/8P0IaychxT0/s1600/DisGeNET.png
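
For reference, a minimal sketch of the two options discussed further down in 
this thread (one relationship per hierarchy level vs. a single relationship 
carrying the subtype as a property), written against the embedded Neo4j 2.x 
Java API. The labels, type names, and store path are placeholders for the 
model above, not taken from DisGeNET:

// Sketch: two ways to model an association hierarchy between a gene and a
// disease. All names below are illustrative placeholders.
import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class AssociationModels {
    public static void main(String[] args) {
        GraphDatabaseService db =
                new GraphDatabaseFactory().newEmbeddedDatabase("target/disgenet.db");
        try (Transaction tx = db.beginTx()) {
            Node gene = db.createNode(DynamicLabel.label("Gene"));
            Node disease = db.createNode(DynamicLabel.label("Disease"));

            // Option 1: materialize every level of the hierarchy as its own
            // relationship, so each level can be traversed directly.
            for (String type : new String[] {"GeneticVariationAssociation",
                    "BiomarkerAssociation", "GeneDiseaseAssociation", "Association"}) {
                gene.createRelationshipTo(disease,
                        DynamicRelationshipType.withName(type));
            }

            // Option 2: a single relationship whose most specific subtype is
            // stored as a property and filtered on at query time.
            Relationship r = gene.createRelationshipTo(disease,
                    DynamicRelationshipType.withName("ASSOCIATION"));
            r.setProperty("subtype", "GeneticVariationAssociation");
            tx.success();
        }
        db.shutdown();
    }
}

Option 1 trades extra relationships (four per association) for cheap 
traversals at every level; option 2 keeps the graph lean but pushes a 
property filter into every query.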





On Saturday, April 19, 2014 6:18:49 PM UTC+2, Lundin wrote:

 Hi Benny,

 In your examples, which seem to have a very finite number of relationship 
 types, I would go for adding relationships vs. properties. The traversal 
 can then be done cheaply rather than involving properties that would be 
 needed in the look-up. This is the best design performance-wise. But of 
 course, if your domain model involves nodes that become dense with millions 
 of outgoing relationships, the number of relationship types can't easily be 
 foreseen, and you want to query from that node, I would think adding 
 properties makes sense.

 Here is actually a good blog post on the topic:
 http://graphaware.com/neo4j/2013/10/24/neo4j-qualifying-relationships.html

 It is very hard without further insight to say exactly how to model your 
 domain.

 And don't forget that you can also limit the search result by a type as 
 well, as in 

 MATCH (x)-[r]-(y) WHERE type(r) = 'IS_DAUGHTER_OF'

 Maybe you could test some CSV data from a known domain, import it, and try 
 some models to find out? I would be happy to read such a report.

 On Wednesday, April 16, 2014 at 14:09:48 UTC+2, Benny Kneissl wrote:

 Hi,

 as far as I know, the smartest way to store hierarchies for node entities 
 is to use the new label feature. Let's suppose an entity is of type B, 
 where B is a subclass of A. Then the node is labeled by both A and B, right?

 But what about hierarchies for relationships? Should several 
 relationships be stored between two entities to model hierarchies for 
 relationships? Should the type of the relationship differ or is it more 
 meaningful to have the same type but different properties?

 A possible example is that isDaughterOf and isSonOf are subtypes of 
 isChildOf when modeling a family tree. Or, from biology, when having a 
 BiochemicalReaction you might want to model isParticipantOf, isEductOf, and 
 isProductOf.

 In this simple hierarchy I think it is sufficient, when asking for all 
 children, to traverse both relationship types, but the hierarchy might 
 become more complex, and then it is likely that you forget one relationship 
 type in Cypher ( (x)-[r:IS_DAUGHTER_OF|IS_SON_OF]-(y) ). If you use 
 only one type ((x)-[r:IS_CHILD_OF]-(y)) you have to add a property 
 daughter / son to ask only for daughters/sons. So what is a good way 
 (performance, complexity of formulating a query) to do it in Neo4j? Adding 
 more relationships, or adding more properties?

 Currently I don't know what the advantages of the different approaches are, 
 in particular with respect to formulating queries afterwards.

 Thank you for some ideas you have in mind,

 Benny





[Neo4j] cut vertex

2014-07-14 Thread 杨军
Hi, how can I find cut vertices (articulation points) in Neo4j?
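
As far as I know there is no built-in cut-vertex function in Cypher, but the 
classic DFS-based articulation-point algorithm can be run over the embedded 
Java API. A rough sketch (Neo4j 2.x names; it is recursive, so very deep 
graphs would need an iterative version, and parallel relationships between 
the same two nodes are not treated specially):

// Sketch: find cut vertices (articulation points) with the Hopcroft-Tarjan
// DFS, treating the graph as undirected. A starting point, not tuned code.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.Transaction;
import org.neo4j.tooling.GlobalGraphOperations;

public class CutVertices {
    private final Map<Long, Integer> disc = new HashMap<>(); // discovery times
    private final Map<Long, Integer> low = new HashMap<>();  // low-link values
    private final Set<Long> cuts = new HashSet<>();
    private int time = 0;

    private void dfs(Node n, Node parent) {
        disc.put(n.getId(), ++time);
        low.put(n.getId(), time);
        int children = 0;
        for (Relationship r : n.getRelationships(Direction.BOTH)) {
            Node m = r.getOtherNode(n);
            if (!disc.containsKey(m.getId())) {
                children++;
                dfs(m, n);
                low.put(n.getId(), Math.min(low.get(n.getId()), low.get(m.getId())));
                // A non-root n is a cut vertex if no back edge from m's
                // subtree reaches strictly above n.
                if (parent != null && low.get(m.getId()) >= disc.get(n.getId())) {
                    cuts.add(n.getId());
                }
            } else if (parent == null || m.getId() != parent.getId()) {
                low.put(n.getId(), Math.min(low.get(n.getId()), disc.get(m.getId())));
            }
        }
        // The DFS root is a cut vertex iff it has more than one child.
        if (parent == null && children > 1) cuts.add(n.getId());
    }

    public Set<Long> find(GraphDatabaseService db) {
        try (Transaction tx = db.beginTx()) {
            for (Node n : GlobalGraphOperations.at(db).getAllNodes()) {
                if (!disc.containsKey(n.getId())) dfs(n, null);
            }
            tx.success();
        }
        return cuts;
    }
}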


-- 

Best regards,
yang jun
Tel:  028-85126877
Mob:  18608027881  (yangjun.chen...@gmail.com)
Chat: yangjun.chen...@gmail.com(gtalk)



[Neo4j] Cypher / Performance Question

2014-07-14 Thread Michael Hauck
Hi,

my graph question is:
Which players played most frequently against each other?

I ran the following Cypher:

match (p:Player)-[:PLAYS_IN]-(t:Team)-[:PLAYED]-(g:Game)-[:PLAYED]-(tt:Team)-[:PLAYS_IN]-(pp:Player)
USING SCAN p:Player USING SCAN pp:Player
return p.Lastname, pp.Lastname, count(pp)
order by count(pp) desc limit 100;

Number of nodes:
Player: 158310
Game: 215068
Team: 218960

The heap is 6 GB; the CPU has 4 cores.

This query takes hours and does not finish.
How should I optimize it?

Thanks,
Michael






[Neo4j] Feature request: 'virtual' relationships in result sets

2014-07-14 Thread Mars Agliullin
Hello, group

I have a use case for 'virtual' (i.e. created on the fly, not persisted in 
the DB) relationships. Say we're looking for pairs of nodes (n1), (n2) in the 
DB that are related somehow (e.g. traversable from n1 to n2). We're not 
interested in the intermediate nodes or relationships between n1 and n2. 
Besides n1 and n2 (and their pairing), the result set contains other 
components; e.g.:

match (n0)-[r]-(n1)-[*1..10]-(n2)
where ...
return n0, r, [n1, n2]

If the graph format is used for results (good for its brevity), we either get 
the whole subgraph, including the components of all paths from n1 to n2, 
which may be huge and is not needed, or we lose the pairing between n1 and 
n2. A better alternative would be to return n1, n2, and a 'virtual' 
relationship from n1 to n2:

match (n0)-[r]-(n1)-[*1..10]-(n2)
where ...
return n0, r, n1, n2, relationship(n1, n2, 'Some label', { name: 'Some name' })

where relationship() is a proposed function returning 'virtual' 
relationships.

Any ideas?



[Neo4j] messages.log files

2014-07-14 Thread Adam Lofts
Hi,

I am running some servers which open lots of Neo4j indexes. These servers 
are 'on the limit' of memory capacity, so there is a lot of logging output 
to messages.log. Is there some way I can limit the size of the messages.log 
file, or alternatively turn off all logging to this file? The disk usage 
just from all the messages.log files is a problem.

Thanks!



Re: [Neo4j] Pros and cons about graph databses and especially Neo4j

2014-07-14 Thread Shireesh

  I am still confused with the *schema-less nature*.

  As I see it, Neo4j still gives us a tightly coupled architecture.

  Imagine the graph grows big as the project progresses, and one day we get 
a new requirement which makes us introduce a new node between the existing 
structure.
  Now this will have a cascading effect all over the graph: all the 
existing traversals need to be reworked to include the new node and 
relationship.

  This will have an impact on all the components, as the whole graph is 
connected.

  Am I missing anything?

  Thanks,
  Shireesh.


On Monday, 4 June 2012 09:20:54 UTC-5, Charles Bedon wrote:

 Hello

 For me the best advantage of using a NoSQL approach is its schema-less 
 nature. It's also a disadvantage if you consider that it's now your 
 responsibility to ensure the model's integrity, but it gives you a lot of 
 freedom if you have to mess with it at runtime (I mean, if the application 
 requires it).

 -
 Charles Edward Bedón Cortázar
 Network Management, Data Analysis and Free Software 
 http://www.neotropic.co | Follow Neotropic on Twitter 
 http://twitter.com/neotropic_co
 Open Source Network Inventory for the masses!  
 http://kuwaiba.sourceforge.net | Follow Kuwaiba on Twitter 
 http://twitter.com/kuwaiba
 Linux Registered User #38


  On Mon, 04 Jun 2012 08:40:45 -0500, *Johnny Weng Luu 
 johnny@gmail.com* wrote:

 It's hard to imagine data with no relations.

 Sooner or later I think you would like to have relations between different 
 data entities.

 Everything is connected.

 Johnny

 On Monday, June 4, 2012 2:20:19 PM UTC+2, Radhakrishna Kalyan wrote:

 Hi 

 This was my first question to Peter at his presentation at the 2011 Dev-Con 
 in Karlskrona.

 As is always mentioned, and as I too realized, NoSQL does not say not to 
 use relational databases, but suggests replacing a relational database with 
 Neo4j where one sees relations (complex or non-complex) among the data.
 I hope you agree.

 I do agree that neo4j is not a silver bullet for every case.

 I see it like this: 
 I will *NOT* use Neo4j in an application if:

 1) The application has only tables with no relations among them, i.e., no 
 foreign key relations among tables.
 2) The application is a legacy application, like Mainframes and DB2 
 containing stored procedures etc., where migrating to a new DB is a major 
 issue. 
 3) The application code contains hard-coded SQL queries to fetch the 
 data from the database, which makes it hard to migrate.

 These are the few cases I found when I was looking to migrate my own 
 application, built on Swing with SQLite as the backend. I used SQL queries 
 within my code.

 I would have been saved if I had used JPA, because thanks to 
 Spring-Data-Neo4J there is support for cross-store persistence. 
 It means that the application can persist to Neo4j and any relational DB 
 using the same entity.
 Please consider looking into Spring-Data-Neo4J.

 Please comment if there is any misconception in my opinion.

 Kalyan




 On Mon, Jun 4, 2012 at 11:40 AM, Michel Domenjoud mdome...@octo.com 
 wrote:

 Hello,
 I'm currently working on graph databases as an R&D subject, and I'm 
 looking for good references about the pros and cons of graph databases.
 I have already watched and read a lot of good articles about graph 
 databases, about their position in the NoSQL ecosystem, and also some 
 benchmarks and performance comparisons with relational databases. So I have 
 a lot of pro arguments for using graph databases: a good fit for highly 
 connected data, powerful traversals, etc., but also some examples that show 
 use cases usually addressed with relational databases (CMS, e-commerce...).

 Now I'm really convinced that graph databases, and especially Neo4j, fit a 
 lot of use cases, and as I was trying to convince some colleagues of 
 Neo4j's benefits, I realized that I lack some cons about graph databases 
 vs. relational databases, and I was almost arguing that Neo4j is a silver 
 bullet, which can't be true.

 So here is my question: does anybody have some references, precise 
 arguments, or use cases that don't fit graph databases but fit much 
 better in relational databases?
 I already have some, but I intentionally won't post anything for the moment 
 in order to start an open debate :)

 Thanks in advance for your answers!




 -- 
 Thanks and Regards
 N Radhakrishna Kalyan
  





Re: [Neo4j] Do you know how to clear cache in Neo4j?

2014-07-14 Thread Nigel Small
I think Michael's question was more about why there is value in measuring
uncached performance at all. In production, caches are typically in play,
and so the performance statistics for warm caches are far more useful for
comparison purposes.

Nigel
On 14 Jul 2014 11:27, jean caca cacaj...@gmail.com wrote:

 He just told you why.

 I'd be interested in knowing how to clear that cache as well, for the same
 purposes.

 Thanks,
 Jean

 On Friday, February 8, 2013 at 15:35:41 UTC-8, Michael Hunger wrote:

 Why would you want to clear the cache?

 Cold / uncached performance is not what you aim for in production
 settings.

 Michael

 On 08.02.2013 at 22:00, i28...@gmail.com wrote:

 Hi~~~

 I have done some tests for benchmarking purposes in Neo4j.

 But I got different results because I didn't remove (clear) the cache.

 So I found a method for flushing the cache:
 http://lists.neo4j.org/pipermail/user/2010-December/006049.html

 However, it is for an old version.

 What should I do to clear the cache?

 Thanks a lot









Re: [Neo4j] Do you know how to clear cache in Neo4j?

2014-07-14 Thread Stefan Armbruster
The Neo4j Enterprise edition features JMX beans for the node cache and the
relationship cache. These beans have a clear() method that clears the
respective cache; see
http://docs.neo4j.org/chunked/stable/jmx-mxbeans.html#jmx-cache-nodecache

/Stefan
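
For completeness, a rough sketch of invoking such a bean from code running in 
the same JVM. The exact ObjectName differs per installation, so the string 
below is an assumption; look the real name up in jconsole or on the page 
above:

// Sketch: clear a Neo4j cache through its JMX MBean (Enterprise edition).
// ASSUMPTION: the ObjectName below is illustrative only; verify the actual
// instance/name attributes of your kernel with jconsole first.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ClearNodeCache {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical name; Neo4j registers its beans under the org.neo4j domain.
        ObjectName nodeCache =
                new ObjectName("org.neo4j:instance=kernel#0,name=Node Cache");
        // Invoke the bean's no-argument clear() operation.
        mbs.invoke(nodeCache, "clear", new Object[0], new String[0]);
    }
}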

2014-07-14 14:07 GMT+02:00 Nigel Small ni...@nigelsmall.com:
 [quoted messages snipped; see the thread above]



Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread José Cornado
Let me reproduce it again. As a side note, I was able to wrap 29,000 in a
transaction.


On Mon, Jul 14, 2014 at 12:19 AM, Michael Hunger 
michael.hun...@neotechnology.com wrote:

 [quoted messages snipped]




-- 
José Cornado

--

home: http://www.efekctive.com
blog:   http://blogging.efekctive.com
--

Everything has been said before, but since nobody listens we have to keep
going back and beginning all over again.

Andre Gide



Re: [Neo4j] Cypher / Performance Question

2014-07-14 Thread Michael Hunger
I think you are creating a cross product of your 150k players, which is 
around 22 billion combinations.

I think this makes more sense:

match (p:Player)-[:PLAYS_IN]-(t:Team)
with t, collect(p) as players
match (t)-[:PLAYED]-(g:Game)-[:PLAYED]-(tt:Team)
with distinct tt, players
match (tt)-[:PLAYS_IN]-(pp:Player)
unwind players as p
return p.Lastname, pp.Lastname, count(*) as frequency
order by frequency desc
limit 100;

Michael

On 14.07.2014 at 12:00, Michael Hauck hau...@gmail.com wrote:

 [original message snipped; quoted in full above]



Re: [Neo4j] Re: Modeling hierarchies for relationships

2014-07-14 Thread Michael Hunger
Hi Benny, 

perhaps it makes sense to cross-post this model to the neo4j-biotech group, 
whose members are more involved with biological models?

Cheers,

Michael

On 14.07.2014 at 10:48, Benny Kneissl benny.knei...@googlemail.com wrote:

 [quoted messages snipped; see Benny's post earlier in this digest]



[Neo4j] Re: Neo4j : restart tomcat issue :org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/dbpath/schema/label/lucene/write.lock

2014-07-14 Thread Kamal Jain
Please help me with how to implement a lifecycle listener. I have a DB that 
has this problem, and I want to resolve it.
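
For what it's worth, the lock error usually means the embedded database was 
never shut down before Tomcat restarted the context. A minimal sketch of a 
servlet lifecycle listener that does this cleanly; the /var/database path is 
taken from the stack trace below, while the class name and attribute key are 
assumptions:

// Sketch: shut the embedded Neo4j database down when the webapp stops, so
// the Lucene write.lock is released before Tomcat restarts the context.
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class Neo4jLifecycleListener implements ServletContextListener {
    private GraphDatabaseService db;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        db = new GraphDatabaseFactory().newEmbeddedDatabase("/var/database");
        // Make the single instance available to the rest of the webapp.
        sce.getServletContext().setAttribute("graphDb", db);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (db != null) db.shutdown(); // releases store and index locks
    }
}

The listener then has to be registered in web.xml with a <listener> element 
so Tomcat calls contextDestroyed() on shutdown.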

On Saturday, April 26, 2014 5:01:23 PM UTC+5:30, Navrattan Yadav wrote:

 Hi, I am using Neo4j 2.0.0-M06.

 When I restart the server (Tomcat), I get this issue:
 org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
 NativeFSLock@/var/dbpath/schema/label/lucene/write.lock


 java.lang.RuntimeException: Error starting org.neo4j.kernel.EmbeddedGraphDatabase, /var/database
 at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:333)
 at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:100)
 at org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:92)
 at org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:197)
 at org.neo4j.graphdb.factory.GraphDatabaseFactory.newEmbeddedDatabase(GraphDatabaseFactory.java:69)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
 at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
 at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
 at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
 at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
 at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
 at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:895)
 at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:843)
 at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:804)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
 at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
 at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
 at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
 at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
 at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
 at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
 at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
 at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
 at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:987)
 at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:579)
 at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.transaction.XaDataSourceManager@7685a519' was successfully initialized, 

Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread José Cornado
It is crashing before it gets to where the original exception occurred.

java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.acquireStatement(KernelTransactionImplementation.java:182)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.acquireStatement(KernelTransactionImplementation.java:63)
at org.neo4j.kernel.impl.core.ThreadToStatementContextBridge.instance(ThreadToStatementContextBridge.java:47)
at org.neo4j.kernel.impl.core.NodeProxy.addLabel(NodeProxy.java:468)
at MY LOGIC
at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:220)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
at org.eclipse.swt.widgets.Display.sendEvent(Display.java:4166)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1466)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1489)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1474)
at org.eclipse.swt.widgets.Widget.notifyListeners(Widget.java:1279)
at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4012)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3651)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$9.run(PartRenderingEngine.java:1113)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:997)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:138)
at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:610)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:567)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)


On Mon, Jul 14, 2014 at 6:52 AM, José Cornado jose.corn...@gmail.com
wrote:

 [quoted messages snipped]




-- 
José Cornado

--

home: http://www.efekctive.com
blog:   http://blogging.efekctive.com
--

Everything has been said before, but since nobody listens we have to keep
going back and beginning all over again.

Andre Gide



[Attachment: messages.log (binary data)]


[Neo4j] Re: ACL schema with Neo4j, data in MySQL/MongoDB

2014-07-14 Thread Benjamin Makus
I have now decided to use Neo4j as my main DB; this will simplify the ACL 
stuff very much.

So here's my basic schema:
node User {
  String username
}

node Role {
  String name
}

relationship MEMBER_OF { }

relationship PARENT_OF { }

relationship HAS_PERMISSION {
  boolean read
  boolean update
  boolean delete
  boolean ...
}


   - Each *User* can be *MEMBER_OF* many roles.
   - Each *User* can have a *HAS_PERMISSION* relation to every other node 
   (i.e. *Article*, *Event*, ...).
   - Each *Role* can have a *HAS_PERMISSION* relation to every other 
   node (i.e. *Article*, *Event*, ...).
   - Each *HAS_PERMISSION* relation defines what is allowed and what's not 
   allowed. The available actions can differ depending on the node; e.g. 
   *Article* has an *addComment* and a *publish* Permission, whereas 
   *Event* doesn't have those Permissions.
   - Each secured node can have a *PARENT_OF* relation to another node, 
   *but* for example *Article* will never have a parent, because it's 
   always at the root level of my application.


This approach looks very flexible, but I'm stuck with the queries... I had 
a look at your ACL example here: 
http://docs.neo4j.org/chunked/stable/examples-acl-structures-in-graphs.html


   1. *Reference Node?*
  - Is there any reason to use a Reference Node instead of Labels?
   2. *Query #1:*
  - *Input: *User node u, some secured Node s
  - *Query: *Find the first HAS_PERMISSION relation that connects u 
  and s
  - *Returns: *the HAS_PERMISSION relation
  - I thought about using shortestPath(), but that doesn't fit all 
  situations.
   3. *Query #2:*
  - *Input: *User node u, some Label l, a Permission p
  - *Query: *Find all nodes labeled with Label l to which User u is 
  somehow related via HAS_PERMISSION, where that HAS_PERMISSION has 
  Permission p set to true
  - *Returns: *List of nodes, labeled with Label l
   
Basically both queries can use a similar algorithm, but I need some kind of 
precedence in it, so it will find the right HAS_PERMISSION relation. It's 
like:

   1. User takes precedence over Role
   2. Secured Node takes precedence over its parent

This means: If I have a graph, where

   - User u has Write Access to the Parent p of Secured Node s
   - Some Role r of the User u has only Read Access to Secured Node s

Then the User u will have write access to Secured Node s, because it has to 
match the User's permissions first.

And there's another scenario:

   - User's Role *r1* has Write Access = false for Secured Node s
   - User's Role *r2* has Write Access = true for Secured Node s

Then the write access must be granted.


Sorry for this huge post, but I can't figure out how to do that...
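
For concreteness, a minimal sketch of Query #2 that ignores the precedence 
rules entirely (it checks direct user permissions and one level of roles 
only), run through the 2.x ExecutionEngine. The Article label and the read 
flag are placeholders from the schema above, and the precedence logic would 
still have to be layered on top:

// Sketch: Query #2 WITHOUT precedence handling (assumed simplification):
// all :Article nodes that a user may read, directly or via one of its roles.
import java.util.HashMap;
import java.util.Map;
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;

public class AclQueries {
    public static ExecutionResult readableArticles(GraphDatabaseService db,
                                                   String username) {
        ExecutionEngine engine = new ExecutionEngine(db);
        Map<String, Object> params = new HashMap<>();
        params.put("name", username);
        // MEMBER_OF*0..1 matches the user itself (0 hops) or one of its
        // roles (1 hop); HAS_PERMISSION with read = true leads to the node.
        return engine.execute(
                "MATCH (u:User {username: {name}})-[:MEMBER_OF*0..1]->()" +
                "-[p:HAS_PERMISSION]->(s:Article) " +
                "WHERE p.read = true " +
                "RETURN DISTINCT s", params);
    }
}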



Re: [Neo4j] Pros and cons about graph databses and especially Neo4j

2014-07-14 Thread Benjamin Makus
That is a problem in your application's architecture. If you use MySQL and 
have a one-to-many relation between A and B, and now you need to store an 
entity C between A and B, you've got to alter the schema and run an update 
on all entries.
(If you can tell us a solution that works in SQL, then there's a 99% chance 
that it works in Neo4j, too.)

Schemaless means that each node (and relation) can store whatever you want 
it to. Node 1 can have { a: true, b: 'B' } and node 2 can have { a: 42, b: 
['A', 'B'], c: false }. So there's no schema; that means you can't say all 
a-properties are of type boolean, and you can't say every node has 3 
properties.

Btw: if you need to add a new node between some existing nodes, Neo4j won't 
care. You can do whatever you want to:
Node 1 is related to Node 2,
and after the update you can have: Node 1 is related to Node 2, and Node 1 
is related to Node 3, which is related to Node 2. No problem.

Again: there's no schema that says Node 1 can only have 1 relation; it can 
have as many as you like, and it can relate to every other node, no matter 
what that node is in your application.

For Neo4j, all your data are just nodes, nothing more. They've got no type 
and arbitrary content. If your application says that Node 1 is a Car and 
Node 2 is a CryptoKey, you can still tell the database to relate them.
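
To make the "insert a node in between" point concrete, a small sketch 
through the 2.x ExecutionEngine; the RELATED type and NewNode label are 
placeholders, not names from this thread:

// Sketch: splice a new node between two existing related nodes without any
// schema migration. RELATED and NewNode are assumed placeholder names.
import java.util.HashMap;
import java.util.Map;
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.graphdb.GraphDatabaseService;

public class SpliceNode {
    public static void splice(GraphDatabaseService db, long fromId, long toId) {
        ExecutionEngine engine = new ExecutionEngine(db);
        Map<String, Object> params = new HashMap<>();
        params.put("from", fromId);
        params.put("to", toId);
        // Replace (a)-[:RELATED]->(b) with (a)-[:RELATED]->(c)-[:RELATED]->(b);
        // only this one subgraph changes, nothing else in the DB is touched.
        engine.execute(
                "MATCH (a)-[r:RELATED]->(b) " +
                "WHERE id(a) = {from} AND id(b) = {to} " +
                "CREATE (a)-[:RELATED]->(c:NewNode)-[:RELATED]->(b) " +
                "DELETE r", params);
    }
}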

On Monday, July 14, 2014 at 13:59:44 UTC+2, Shireesh wrote:


 [quoted thread snipped; see the "Pros and cons" thread earlier in this digest]

[Neo4j] Re: Feature request: 'virtual' relationships in result sets

2014-07-14 Thread Jason Gillman Jr.
I'm guessing you just want some indication that there's a path (or no path) 
between N1 and N2?

I guess a bit more context would help to determine what you're trying to do 
exactly - what's the use case?

On Monday, July 14, 2014 1:01:13 AM UTC-4, Mars Agliullin wrote:

 [quoted message snipped; see the original post earlier in this digest]





Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread Michael Hunger
Sorry, can't see where you shared your code / queries / statements.

Michael

On 14.07.2014 at 15:18, José Cornado jose.corn...@gmail.com wrote:

 [quoted messages and stack trace snipped; same trace as quoted above]


Re: [Neo4j] Re: Feature request: 'virtual' relationships in result sets

2014-07-14 Thread Michael Hunger
You can create a more complex structure too, so there is no need to use the 
graph representation,

e.g. return { start: n1, rel: 'MY_REL_TYPE', end: n2 }

Michael
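
Against the query from the original post, that workaround might look like 
the sketch below (run through the 2.x ExecutionEngine; the CONNECTED_TO name 
is a placeholder, and the original WHERE clause is omitted):

// Sketch: return a 'virtual' relationship as a literal map so the n1/n2
// pairing survives without shipping the intermediate paths. CONNECTED_TO is
// an assumed placeholder name.
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;

public class VirtualRelationships {
    public static ExecutionResult pairs(GraphDatabaseService db) {
        ExecutionEngine engine = new ExecutionEngine(db);
        // Each result row carries the pairing as a plain map value.
        return engine.execute(
                "MATCH (n0)-[r]-(n1)-[*1..10]-(n2) " +
                "RETURN n0, r, " +
                "{ start: n1, rel: 'CONNECTED_TO', end: n2 } AS virtualRel");
    }
}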

On 14.07.2014 at 17:11, Jason Gillman Jr. mackdaddydie...@gmail.com wrote:

 [quoted messages snipped]



Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread Michael Hunger
Make sure your tx is not too big for your memory; according to your 
messages.log, your JVM runs with only a few megabytes of memory.

Try to limit your tx size to 10k or 20k elements (nodes and rels), 
restarting the tx once a counter reaches the threshold:

if (count++ == 10000) {
    tx.success(); tx.close();
    tx = db.beginTx();
    count = 0;
}
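
Spelled out as a complete loop, under the same assumptions (createMyNode() 
stands in for the per-node logic, which was not shared):

// Sketch: batched insert that commits every BATCH_SIZE elements so one huge
// transaction cannot exhaust the heap. createMyNode() is a placeholder.
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;

public class BatchedInsert {
    private static final int BATCH_SIZE = 10000;

    public static void insertAll(GraphDatabaseService db, int total) {
        Transaction tx = db.beginTx();
        try {
            int count = 0;
            for (int i = 0; i < total; i++) {
                createMyNode(db, i); // placeholder for the real per-node work
                if (++count == BATCH_SIZE) {
                    tx.success();
                    tx.close();        // commit this batch
                    tx = db.beginTx(); // start the next one
                    count = 0;
                }
            }
            tx.success(); // commit the final partial batch
        } finally {
            tx.close();
        }
    }

    private static void createMyNode(GraphDatabaseService db, int i) {
        db.createNode().setProperty("idx", i);
    }
}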

Cheers,

Michael

On 14.07.2014 at 17:21, José Cornado jose.corn...@gmail.com wrote:

 I managed to reproduce the same exception.
 
 Trace and log file are included. I will put together a test case in a few
 
 
 
 (org.neo4j.graphdb.TransactionFailureException) 
 org.neo4j.graphdb.TransactionFailureException: Unable to commit transaction
 
 Caused by: org.neo4j.graphdb.TransactionFailureException: commit threw exception
 at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:498)
 at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:397)
 at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:122)
 at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
 at endTransaction(...)
 ... 31 more
 
 Caused by: javax.transaction.xa.XAException
 at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:553)
 at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:460)
 ... 38 more
 
 Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
 at org.apache.lucene.util.BytesRef.<init>(BytesRef.java:77)
 at org.apache.lucene.store.DataOutput.writeString(DataOutput.java:111)
 at org.apache.lucene.index.FieldsWriter.writeField(FieldsWriter.java:212)
 at org.apache.lucene.index.StoredFieldsWriterPerThread.addField(StoredFieldsWriterPerThread.java:58)
 at org.apache.lucene.index.DocFieldProcessorPerThread.processDocument(DocFieldProcessorPerThread.java:265)
 at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:766)
 at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2060)
 at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2034)
 at org.neo4j.kernel.api.impl.index.LuceneIndexAccessor.add(LuceneIndexAccessor.java:151)
 at org.neo4j.kernel.api.impl.index.LuceneIndexAccessor$LuceneIndexUpdater.process(LuceneIndexAccessor.java:186)
 at org.neo4j.kernel.impl.api.index.FlippableIndexProxy$LockingIndexUpdater.process(FlippableIndexProxy.java:337)
 at org.neo4j.kernel.impl.api.index.ContractCheckingIndexProxy$1.process(ContractCheckingIndexProxy.java:102)
 at org.neo4j.kernel.impl.api.index.IndexingService.processUpdateIfIndexExists(IndexingService.java:411)
 at org.neo4j.kernel.impl.api.index.IndexingService.applyUpdates(IndexingService.java:359)
 at org.neo4j.kernel.impl.api.index.IndexingService.updateIndexes(IndexingService.java:310)
 at org.neo4j.kernel.impl.nioneo.xa.WriteTransaction.applyCommit(WriteTransaction.java:817)
 at org.neo4j.kernel.impl.nioneo.xa.WriteTransaction.doCommit(WriteTransaction.java:751)
 at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:322)
 at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commitWriteTx(XaResourceManager.java:530)
 at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:446)
 at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64)
 at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:545)
 at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:460)
 at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:397)
 at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:122)
 at org.neo4j.kernel.TopLevelTransaction.close(TopLevelTransaction.java:124)
 - endTransaction(...)
 
 
 
 [quoted message and stack trace snipped; see above]
Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread José Cornado
That I know. What is the cost per element (node/rel) in a transaction? A few
kb?


On Mon, Jul 14, 2014 at 9:24 AM, Michael Hunger 
michael.hun...@neotechnology.com wrote:

 Make sure your tx is not too big for your memory; according to your
 messages.log your JVM only runs with a few megabytes of memory.

 Try to limit your tx-size to 10k or 20k elements (nodes and rels),
 restarting the tx roughly like this:

 if (count++ == 10_000) {
 tx.success(); tx.close();
 tx = db.beginTx();
 count = 0;
 }
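 In full, a minimal compilable sketch of that pattern (a reconstruction, not
 the original code; the store path, property values, and the 10k batch size
 are illustrative assumptions):

 import org.neo4j.graphdb.GraphDatabaseService;
 import org.neo4j.graphdb.Node;
 import org.neo4j.graphdb.Transaction;
 import org.neo4j.graphdb.factory.GraphDatabaseFactory;

 public class BatchedInsert {
     private static final int BATCH_SIZE = 10_000;

     public static void main(String[] args) {
         GraphDatabaseService db =
                 new GraphDatabaseFactory().newEmbeddedDatabase("target/graph.db");
         Transaction tx = db.beginTx();
         int count = 0;
         try {
             for (int i = 0; i < 1_000_000; i++) {
                 Node node = db.createNode();
                 node.setProperty("value", i);
                 // Commit and reopen the tx every BATCH_SIZE elements so the
                 // uncommitted transaction state cannot exhaust the heap.
                 if (++count == BATCH_SIZE) {
                     tx.success();
                     tx.close();
                     tx = db.beginTx();
                     count = 0;
                 }
             }
             tx.success();
         } finally {
             tx.close();
             db.shutdown();
         }
     }
 }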

 Cheers,

 Michael

 Am 14.07.2014 um 17:21 schrieb José Cornado jose.corn...@gmail.com:

 I managed to reproduce the same exception.

 Trace and log file are included. I will put together a test case in a few



Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread José Cornado
The data is fed to an engine that is able to process work in the millions.
The number of relationships is really low compared to the number of nodes, so
the per-node cost of a transaction is the driving factor.

Thanks!



Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread Michael Hunger
In my experience it depends on what you do; I had good results with tx-sizes 
from 1k to 30k.

But if you need to import millions at once, you might want to look into 
batch-insertion, see http://neo4j.org/develop/import
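For reference, a hedged sketch of the batch-inserter route (my sketch, not
taken from the linked page; it is non-transactional and single-threaded, for
offline initial imports only, and the store path and data are illustrative):

import java.util.Collections;
import java.util.Map;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class OfflineImport {
    public static void main(String[] args) {
        // The batch inserter bypasses transactions entirely, so it must only
        // run against a store that no database instance has open.
        BatchInserter inserter = BatchInserters.inserter("target/graph.db");
        try {
            for (int i = 0; i < 1_000_000; i++) {
                Map<String, Object> props =
                        Collections.<String, Object>singletonMap("value", i);
                inserter.createNode(props);
            }
        } finally {
            inserter.shutdown(); // flushes and closes the store files
        }
    }
}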

Cheers,

Michael


Re: [Neo4j] Pros and cons about graph databases and especially Neo4j

2014-07-14 Thread shireesh adla
Thanks Benjamin.
I can now connect with what you explained about the schema.
Schema-less is in terms of a node, and I [wrongly] assumed it for the whole graph.

Now coming to the application architecture problem:
as we know we cannot freeze our architecture due to ever-changing
requirements, what can we do to make it more flexible?

I can understand it's a different problem altogether, but can we come up
with a best-practice graph structure which can handle these kinds of
scenarios, following which would give you a more flexible graph structure
that is resilient to new changes?

Shireesh.



On Mon, Jul 14, 2014 at 10:02 AM, Benjamin Makus benne...@gmail.com wrote:

 That is a problem in your application's architecture. If you use MySQL and
 have a one-to-many relation between A and B, and now you need to store an
 entity C between A and B, you've got to alter the schema and run an update
 on all entries.
 (If you can tell us a solution that works in SQL, then there's a 99%
 chance that it works in Neo4j, too.)

 Schemaless means that each node (and relation) can store whatever you
 want it to. Node 1 can have { a: true, b: "B" } and node 2 can have { a:
 42, b: ["A", "B"], c: false }. So there's no schema; that means you can't
 say all a-properties are of type boolean, and you can't say every node has
 3 properties.

 Btw: If you've got a need to add a new node between some existing nodes,
 then Neo4j won't care. You can do whatever you want to:
 Node 1 is related to Node 2,
 and after the update you can have: Node 1 is related to Node 2, and Node 1
 is related to Node 3, which is related to Node 2. No problem.
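 In Cypher, for example, that update is a single statement and needs no schema
 migration (the node names and the RELATED_TO type are illustrative):

 MATCH (n1 {name: 'Node 1'})-[:RELATED_TO]->(n2 {name: 'Node 2'})
 CREATE (n1)-[:RELATED_TO]->(n3 {name: 'Node 3'})-[:RELATED_TO]->(n2)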

 Again: there's no schema that says Node 1 can only have one relation;
 it can have as many as you like, and it can relate to every other node, no
 matter what this node is in your application.

 For Neo4j, all your data are just nodes, nothing more. They've got no type
 and arbitrary content. If your application says that Node 1 is a Car and
 Node 2 is a CryptoKey, you can still tell the database to relate them.

 Am Montag, 14. Juli 2014 13:59:44 UTC+2 schrieb Shireesh:


  I am still confused with the *schema-less nature*.

  As I can see it, Neo4j still gives us a tightly coupled architecture.

  Imagine the graph grows big as the project progresses, and one day we
 get a new requirement which makes us introduce a new node into the existing
 structure.
  Now this will have a cascading effect all over the graph: all the
 existing traversals need to be reworked to include the new node and
 relationship.

  This will have an impact on all the components, as the whole graph is
 connected.

  Am I missing anything?

   Thanks,
   Shireesh.


 On Monday, 4 June 2012 09:20:54 UTC-5, Charles Bedon wrote:

 Hello

 For me the best advantage of using a NOSQL approach is its schema-less
 nature. It's also a disadvantage if you consider that it's now your
 responsibility to ensure the model integrity, but it gives you a lot of
 freedom if you have to mess with it at runtime (I mean, if the application
 requires it).

 -
 Charles Edward Bedón Cortázar
 Network Management, Data Analysis and Free Software
 http://www.neotropic.co | Follow Neotropic on Twitter
 http://twitter.com/neotropic_co
 Open Source Network Inventory for the masses!
 http://kuwaiba.sourceforge.net | Follow Kuwaiba on Twitter
 http://twitter.com/kuwaiba
 Linux Registered User #38


 Am Mon, 04 Jun 2012 08:40:45 -0500 schrieb Johnny Weng Luu
 johnny@gmail.com: 

 It's hard to imagine data with no relations.

 Sooner or later I think you would like to have relations between
 different data entities.

 Everything is connected.

 Johnny

 On Monday, June 4, 2012 2:20:19 PM UTC+2, Radhakrishna Kalyan wrote:

 Hi

 This was my first question to Peter on his presentation in Karlskrona in
 2011 Dev-Con.

  As it is always mentioned, and as I too realized, NoSQL does not say not to
 use a relational database, but suggests replacing the relational database with
 Neo4j where one sees relations (complex or non-complex) among the data.
 I hope you agree.

 I do agree that neo4j is not a silver bullet for every case.

 I see it like this:
  I will *NOT* use Neo4j in an application if:

 1) The application has only tables with no relations among them, i.e. no
 foreign-key relations among tables.
 2) The application is a legacy application like mainframes and DB2
 containing stored procedures etc., where migrating to a new DB is a major
 issue.
 3) The application code contains hard-coded SQL queries to fetch the
 data from the database, which makes it hard to migrate.

 These are the few cases I found when I was looking to migrate my own
 application built on Swing with SQLite as the backend. I used SQL queries
 within my code.

 I would have been saved if I had used JPA, because thanks to
 Spring-Data-Neo4j there is support for cross-store persistence.
 It means that the application can persist to Neo4J and any relational db
 

Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread José Cornado
I am not importing, I am creating. These were very simple (one value == one
node); it can be more complex than that. Where can I find docs about the
memory usage and setup?



Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread José Cornado
FYI,

I ran a little, crude experiment. I increased the -Xmx (max heap size)
option from 512 to 1024 MB while incrementing the number of nodes added to
the graph. It crashed around 78k nodes.

So a crude guideline is roughly (78k - 30k) additional nodes per extra 512 MB
of heap, i.e. about 90-100 nodes per MB in this crude setup, in case you want
to minimize the number of transactions used.



Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread Michael Hunger
Yep, makes sense. In general it is sensible to stay in the 30k range, so you get 
the best trade-off between tx-commit disk flushes and the memory usage of a 
large tx.
See: 
http://jexp.de/blog/2013/05/on-importing-data-in-neo4j-blog-series/



Re: [Neo4j] Performance/Limits of closing a transaction after big insert

2014-07-14 Thread José Cornado
ok.



[Neo4j] Neo4j RPM / yum repository?

2014-07-14 Thread Alan Robertson

You've had .deb packages of Neo4j for a long time.

Do you have a YUM (RPM) repository for Neo4j?

--
Alan Robertson al...@unix.sh - @OSSAlanR

Openness is the foundation and preservative of friendship...  Let me claim from you 
at all times your undisguised opinions. - William Wilberforce

--
You received this message because you are subscribed to the Google Groups 
Neo4j group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Neo4j RPM / yum repository?

2014-07-14 Thread Wes Freeman
This one:
http://yum.neo4j.org/

On Mon, Jul 14, 2014 at 4:25 PM, Alan Robertson al...@unix.sh wrote:

 You've had .deb packages of Neo4j for a long time.

 Do you have a YUM (RPM) repository for Neo4j?

 --
 Alan Robertson al...@unix.sh - @OSSAlanR

 Openness is the foundation and preservative of friendship...  Let me
 claim from you at all times your undisguised opinions. - William
 Wilberforce

 --
 You received this message because you are subscribed to the Google Groups
 Neo4j group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to neo4j+unsubscr...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
Neo4j group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Neo4j RPM / yum repository?

2014-07-14 Thread Alan Robertson

It says it's not suitable for production work.  Is that still true?


On 07/14/2014 02:29 PM, Wes Freeman wrote:

This one:
http://yum.neo4j.org/



--
Alan Robertson al...@unix.sh - @OSSAlanR

Openness is the foundation and preservative of friendship...  Let me claim from you 
at all times your undisguised opinions. - William Wilberforce

--
You received this message because you are subscribed to the Google Groups 
Neo4j group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] insert will impact read? why? Any methods?

2014-07-14 Thread Michael Hunger
Sorry, but to help you, you have to share your code with us.


Am 10.07.2014 um 11:58 schrieb Handy yu handy198...@gmail.com:

 Hi,
 
 I use an embedded graph and a new WrappingNeoServerBootstrapper.
 
 I insert data into the Neo4j embedded graph with the Java core API, and while
 inserting I query something via the web browser, something like "start
 node=node(*) return count(node)" or "start node=node(1002) return node".
 
 Sometimes I get an unknown error from the web browser after a short
 while (about XX seconds).
 
 Sometimes it gives me the result, but it costs too much time.
 
 I want to know why and how to handle it.
 
 By the way, I changed the value of webserver_max_threads_property_key to
 400.
 
 Many thanks.
 
 Greets.
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 Neo4j group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to neo4j+unsubscr...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
Neo4j group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] [Neo4j - Load CSV ] create not enough nodes in csv file.

2014-07-14 Thread Michael Hunger
No, that should have no influence.
Do you already have the users in your db?

Do you have an index on :ThanhVien(id) and :DiaDiem(id)?

try

LOAD CSV FROM "file:E:/binhluan.tsv" AS line FIELDTERMINATOR '\t'
with line limit 100
return line;

and

LOAD CSV FROM "file:E:/binhluan.tsv" AS line FIELDTERMINATOR '\t'
return count(*);

LOAD CSV FROM "file:E:/binhluan.tsv" AS line FIELDTERMINATOR '\t'
with line limit 100
MATCH ( a:`ThanhVien`{ id:line[4] } ), ( b:`DiaDiem`{ id:line[2] } )
return a,b
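If those indexes do not exist yet, they can be created first (standard 2.x
schema-index syntax; the labels and the id property are taken from the query
above):

CREATE INDEX ON :ThanhVien(id);
CREATE INDEX ON :DiaDiem(id);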


Am 09.07.2014 um 06:40 schrieb Nguyen Minh Nhut nguyenminhnhutk...@gmail.com:

 Hi Michael!
 I have a .tsv file. I cleaned all the ' and " characters in that file.
 When I run the LOAD CSV command, the console loads it and then does
 nothing:
 
 USING PERIODIC COMMIT 100
 LOAD CSV FROM "file:E:/binhluan.tsv" AS line FIELDTERMINATOR '\t'
 MATCH ( a:`ThanhVien`{ id:line[4] } ), ( b:`DiaDiem`{ id:line[2] } )
 CREATE (a)-[:BinhLuan{ NoiDung:line[1]}]->(b);
 
 I don't know why; is this because the content of line[1] is too big?
 
 
 On Sat, Jul 5, 2014 at 7:45 AM, Nguyen Minh Nhut 
 nguyenminhnhutk...@gmail.com wrote:
 OK Thks!
 
 
 
 On Sat, Jul 5, 2014 at 3:25 AM, Michael Hunger 
 michael.hun...@neotechnology.com wrote:
 I just deleted all quotes and the emojii smileys ;)
 
 Sent from mobile device
 
 Am 04.07.2014 um 17:10 schrieb Nguyen Minh Nhut 
 nguyenminhnhutk...@gmail.com:
 
 I have escaped all the single quotes by doubling them and deleted all the
 smileys like you said. Now when I count(*), it returns 30375 instead of 41879
 nodes.
 Could you tell me what characters I should check that could affect my results?
 Sorry about my bad English.
 
 
 On Fri, Jul 4, 2014 at 5:28 PM, Michael Hunger 
 michael.hun...@neotechnology.com wrote:
 You have several single quotes in your file, which cause our csv reader to 
 continue to read until it finds the next quote.
 
 You have to escape those by doubling them.
 
 Many of them are smileys like this :)
 
 LOAD CSV FROM "file:///Users/mh/Downloads/thanhvien.csv" AS line return 
 count(*);
 
 +----------+
 | count(*) |
 +----------+
 | 41881    |
 +----------+
 1 row
 
 
 
 On Fri, Jul 4, 2014 at 4:57 AM, Nguyen Minh Nhut 
 nguyenminhnhutk...@gmail.com wrote:
 Hi good guys!
 My .csv file has 41879 rows. When I run the LOAD CSV command it creates just 
 13415 nodes.
 
 This is my cypher command:
 
 USING PERIODIC COMMIT 1000 LOAD CSV FROM "file:E:/thanhvien.csv" AS line 
 create (u:ThanhVien { id: line[0], HoTen : line[1] , GioiTinh: line[2], 
 NgaySinh: line[3] , GioiThieuBanThan: line[4], DienThoai: line[5] , DiaChi: 
 line[6] , Email: [7] , NgheNghiep: line[7] , CoQuan: line[8] , SoThichKhac: 
 line[9] , Quan: line[10] , TinhTrangHonNhan: line[11] , HinhDaiDien: 
 line[12] } );
 
 Don't know why!
 

-- 
You received this message because you are subscribed to the Google Groups 
Neo4j group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Correct @Query Syntax?

2014-07-14 Thread Michael Hunger
You are missing @GraphId Long id; 
in your class

for your query:

 match (game:Game)<-[:follows]-(follower:Account) where id(game) = {0} return 
 follower.accountId
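Putting both fixes together, a sketch of the corrected entity and repository
(my reconstruction, assuming the Spring Data Neo4j 3.x annotations used in the
question; the two types would live in separate files):

import java.util.Set;
import org.neo4j.graphdb.Direction;
import org.springframework.data.neo4j.annotation.Fetch;
import org.springframework.data.neo4j.annotation.GraphId;
import org.springframework.data.neo4j.annotation.NodeEntity;
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.annotation.RelatedTo;
import org.springframework.data.neo4j.repository.GraphRepository;

@NodeEntity
public class Account {
    @GraphId
    private Long id;          // internal node id, required by SDN
    private Long accountId;   // domain-level id

    @Fetch
    @RelatedTo(type = "follows", direction = Direction.OUTGOING,
               elementClass = Game.class)
    private Set<Game> followingGame;
}

public interface GameRepository extends GraphRepository<Game> {
    // {0} binds the gameId argument; id(game) matches the internal node id
    @Query("match (game:Game)<-[:follows]-(follower:Account) "
         + "where id(game) = {0} return follower.accountId")
    Iterable<Long> ids(Long gameId);
}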


Am 07.07.2014 um 19:57 schrieb apprentice...@googlemail.com:

 Trying to retrieve all accountIds that have a 'follows' relationship with a game. I 
 may have missed something in the Query (line 16), as it is returning empty.
 class Account {
 
 private Long accountId;
 
 @Fetch
 @RelatedTo(type = "follows",
 direction = Direction.OUTGOING,
 elementClass = Game.class)
 private Set<Game> followingGame;
 
 }
 
 
 Repository interface:
 public interface GameRepository extends GraphRepository<Game> {
 // Return all account ids following a game
 @Query("start game=node(*) match (game)-[:follows]-(follower) where 
 game.id=({0}) return follower.accountId")
 public Iterable<Long> ids(Long gameId);

 }
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 Neo4j group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to neo4j+unsubscr...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
Neo4j group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Neo4j create multiple nodes

2014-07-14 Thread Michael Hunger
Perhaps you can tell us a bit more about your domain and your data volumes 
upfront?

Check out these links for some more info:

http://neo4j.org/develop/import
http://jexp.de/blog/2014/06/load-csv-into-neo4j-quickly-and-successfully/
http://jexp.de/blog/2013/05/on-importing-data-in-neo4j-blog-series/
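As one concrete flavor of what those posts describe, a sketch of a single
parameterized statement per batch (an assumption on my part: this needs Neo4j
2.1+ for UNWIND, {batch} must be supplied as a parameter holding a list of
property maps, and the :Item label is illustrative):

// {batch} is a parameter, e.g. [{name: "a"}, {name: "b"}, ...]
UNWIND {batch} AS row
CREATE (n:Item)
SET n = row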

Am 07.07.2014 um 15:14 schrieb Sagun Pai sagung@gmail.com:

 Hello all!
 
 I am using Neo4j to build a huge graph database (over a million nodes). The 
 way I'm doing it right now is running a Cypher CREATE (n {property: 'value'}) 
 query for each of the nodes. As expected, this is quite an inefficient method 
 and it takes lots of time. Can somebody suggest an alternative method to 
 overcome this issue? I've heard that Neo4j also provides some default batch 
 interface for creating multiple nodes. Thanks in advance! :)
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 Neo4j group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to neo4j+unsubscr...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
Neo4j group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] Is there any plan to improve SDN based on REST mode?

2014-07-14 Thread Michael Azerhad
Hi Michael,

Thanks for your answer. 

Currently, using the REST mode, I wrote all my queries (on the read side) using 
pure Cypher (without SDN at all). 
However, I'm still using SDN to query some data that my commands strictly 
need, like retrieving entities (indeed, I separated reads and commands 
(CQRS) in my application).

I think the best way would be to write Cypher queries for some complex 
parts of the commands too (like an entity having 4 collections annotated 
with @Fetch), even if it would break entity data encapsulation, as I 
mentioned above...

Writing an unmanaged extension would be hard to manage IMHO, since it would 
split the application into more parts.

Thanks a lot :)

Michael

On Tuesday, July 15, 2014 2:05:58 AM UTC+2, Michael Hunger wrote:

 Hi Michael,

 I've had plans for a long time to write a different implementation of SDN 
 that runs on top of a Cypher-based OGM, which then uses a Cypher 
 connector (like the Neo4j-JDBC driver) to talk to an embedded or remote 
 Neo4j database.

 Unfortunately I haven't yet found the time to address that.

 Am 11.07.2014 um 01:23 schrieb Michael Azerhad michael...@gmail.com:

 Hi,

 I well know that SDN is fully optimized for embedded database. 


 I would like to know if there are any plans (or maybe it's already done?) to 
 improve the way SDN manages @Fetch requests on lazy collections when using 
 REST mode.
 Indeed, some use cases are very slow with it.

 Right, that's why I currently recommend to move the SDN part to the server 
 as an unmanaged extension.


 A good workaround would be to write a pure Cypher query (@Query on a 
 repo's method)   to fetch for lazy collections. 

 The problem is still that the remote representation is flaky and not well 
 suited, so you have to do multiple requests to get all the meta-data.

 However, this would break entity encapsulation, since I don't want to 
 populate the list inside the Entity = an entity is a simple POJO not aware 
 about repositories. 

 I think the repo should probably use the query to populate a simple DTO 
 class (annotated with @QueryResult) which then can be used in the front-end.

 Michael


 So it's not a pure technical question, but a curiosity :)

 Thanks a lot,

 Michael

 -- 
 You received this message because you are subscribed to the Google Groups 
 Neo4j group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to neo4j+un...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.




-- 
You received this message because you are subscribed to the Google Groups 
Neo4j group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] An efficient way of getting the count of friends?

2014-07-14 Thread Michael Hunger
Try:

 START src=node(%d)
 MATCH src-[:KNOWS]-(friend)
 MATCH (friend)-[:KNOWS]-foaf
 WITH DISTINCT foaf, src // reduce the cardinality
 MATCH (foaf)-[:KNOWS*..2]-(dest)
 WHERE src <> dest AND NOT(src-[:KNOWS]-dest)
 RETURN count(DISTINCT id(dest)) as count_friends
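A side note, sketched under assumptions (embedded Java API, Neo4j 2.x; the
class and method names are my own): passing the node id as a parameter instead
of String.format lets Cypher parse and plan the query once:

import java.util.Collections;
import java.util.Map;
import org.neo4j.cypher.javacompat.ExecutionEngine;

public class FriendCounts {
    // Counts distinct people 2..4 hops out via KNOWS, excluding src itself
    // and its direct friends, mirroring the query above.
    static long countFriends(ExecutionEngine engine, long nodeId) {
        String query =
              "START src=node({nodeId}) "
            + "MATCH (src)-[:KNOWS]-(friend)-[:KNOWS]-(foaf) "
            + "WITH DISTINCT src, foaf "
            + "MATCH (foaf)-[:KNOWS*..2]-(dest) "
            + "WHERE src <> dest AND NOT (src)-[:KNOWS]-(dest) "
            + "RETURN count(DISTINCT id(dest)) AS count_friends";
        Map<String, Object> params =
                Collections.<String, Object>singletonMap("nodeId", nodeId);
        // Reuse one ExecutionEngine (new ExecutionEngine(db)) across calls.
        return (Long) engine.execute(query, params)
                            .columnAs("count_friends").next();
    }
}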

Am 07.07.2014 um 15:09 schrieb Frandro kimbk.mob...@gmail.com:

 I need to get the count of friends between 2 and 4 depths for all nodes with 
 the following query.
 
 String query = String.format("START src=node(%d) MATCH src-[:KNOWS*2..4]-dest 
 WHERE NOT(src-[:KNOWS*0..1]-dest) RETURN count(DISTINCT id(dest)) as 
 count_friends", nodeId);
 
 However, the problem is that traversing all the nodes and running the 
 query for each of them is quite heavy.
 
 Is there a better way to mitigate this heavy workload? 
 
 Alternatively, is there a single query that returns a result table with two 
 columns, the node and its count of friends?
 
 
 



Re: [Neo4j] Neo4j RPM / yum repository?

2014-07-14 Thread Wes Freeman
Maybe because it has the OpenJDK dependency (which is officially supported as
of 2.1, but previously wasn't supported for production)... I don't see why
it wouldn't be suitable, other than that you should probably tweak the config
files for your use case. That's just my opinion.
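
(For anyone trying it, a hedged sketch of wiring the repository up — the
repo file layout below is an assumption; verify the actual baseurl against
http://yum.neo4j.org/ before using it:)

  # repo definition is an assumption; check http://yum.neo4j.org/ first
  sudo tee /etc/yum.repos.d/neo4j.repo <<'EOF'
  [neo4j]
  name=Neo4j
  baseurl=http://yum.neo4j.org/
  enabled=1
  gpgcheck=0
  EOF
  sudo yum install neo4j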

Wes

On Mon, Jul 14, 2014 at 6:07 PM, Alan Robertson al...@unix.sh wrote:

  It says it's not suitable for production work.  Is that still true?



 On 07/14/2014 02:29 PM, Wes Freeman wrote:

 This one:
 http://yum.neo4j.org/

 On Mon, Jul 14, 2014 at 4:25 PM, Alan Robertson al...@unix.sh wrote:

 You've had .deb packages of Neo4j for a long time.

 Do you have a YUM (RPM) repository for Neo4j?

 --
 Alan Robertson al...@unix.sh - @OSSAlanR

 Openness is the foundation and preservative of friendship...  Let me
 claim from you at all times your undisguised opinions. - William
 Wilberforce






 --
 Alan Robertson al...@unix.sh - @OSSAlanR

 Openness is the foundation and preservative of friendship...  Let me claim 
 from you at all times your undisguised opinions. - William Wilberforce





Re: [Neo4j] Full Uninstall Neo4j

2014-07-14 Thread Michael Hunger
Please use --force
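
(Reading the --force hint loosely — apt-get purge itself has no such flag, so 
this is an assumption about what is meant. Since the failure is the missing 
/etc/init.d/neo4j-service init script, a common workaround is to give the 
pre-removal script something to call and then purge again:)

  # assumption: the prerm script only needs the init script to exist
  sudo touch /etc/init.d/neo4j-service
  sudo chmod +x /etc/init.d/neo4j-service
  sudo apt-get purge neo4j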



Sent from mobile device

On 02.07.2014 at 16:25, Sotiris Beis sot.b...@gmail.com wrote:

 I am trying to uninstall Neo4j with
 apt-get remove neo4j 
 apt-get purge neo4j
 but I get the following error:
  invoke-rc.d: unknown initscript, /etc/init.d/neo4j-service not found.
 dpkg: error processing neo4j (--remove):
  subprocess installed pre-removal script returned error exit status 100
 Errors were encountered while processing:
  neo4j
 E: Sub-process /usr/bin/dpkg returned an error code (1)
 
 Have I done something wrong?
 
 Thanks,
 Sotiris 
