[Neo4j] Re: LOAD CSV takes over an hour

2014-06-17 Thread Pavan Kumar
Hi,
I have deployed neo4j 2.1.0-M01 on Windows, on a machine with 8GB RAM. I am 
trying to import a CSV file which has 3 records. I am importing with a 
USING PERIODIC COMMIT 1000 
LOAD CSV command, but it gives an unknown error. I have modified the 
neo4j.properties file as advised in the blogs. My neo4j.properties now 
looks like 
# Default values for the low-level graph engine

neostore.nodestore.db.mapped_memory=200M
neostore.relationshipstore.db.mapped_memory=4G
neostore.propertystore.db.mapped_memory=500M
neostore.propertystore.db.strings.mapped_memory=500M
neostore.propertystore.db.arrays.mapped_memory=500M

# Enable this to be able to upgrade a store from an older version
allow_store_upgrade=true

# Enable this to specify a parser other than the default one.
#cypher_parser_version=2.0

# Keep logical logs, helps debugging but uses more disk space, enabled for
# legacy reasons. To limit space needed to store historical logs use values
# such as: "7 days" or "100M size" instead of "true"
keep_logical_logs=true

# Autoindexing

# Enable auto-indexing for nodes, default is false
node_auto_indexing=true

# The node property keys to be auto-indexed, if enabled
#node_keys_indexable=name,age

# Enable auto-indexing for relationships, default is false
relationship_auto_indexing=true

# The relationship property keys to be auto-indexed, if enabled
#relationship_keys_indexable=name,age

# Setting for Community Edition:
cache_type=weak

Still I am facing the same problem. Is there any other file where I should 
change properties? Kindly help me with this issue.
Thanks in advance
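Before tuning memory settings, it can help to confirm the file is readable at all. A minimal sanity-check sketch (the Windows path here is hypothetical, not from the original post):

```
// Count rows without creating any data (path is hypothetical)
LOAD CSV FROM "file:c:/data/import.csv" AS row
RETURN count(row)
```

If this already fails, the problem lies with the file or URL rather than with the memory configuration.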

On Tuesday, 4 March 2014 21:24:03 UTC+5:30, Aram Chung wrote:
>
> Hi,
>
> I was asked to post this here by Mark Needham (@markhneedham) who thought 
> my query took longer than it should.
>
> I'm trying to see how graph databases could be used in investigative 
> journalism: I was loading in New York State's Active Corporations: 
> Beginning 1800 data from 
> https://data.ny.gov/Economic-Development/Active-Corporations-Beginning-1800/n9v6-gdp6
>  
> as a 1964486-row csv (and deleted all U+F8FF characters, because I was 
> getting "[null] is not a supported property value"). The Cypher query I 
> used was 
>
> USING PERIODIC COMMIT 500
> LOAD CSV
>   FROM 
> "file://path/to/csv/Active_Corporations___Beginning_1800__without_header__wonky_characters_fixed.csv"
>   AS company
> CREATE (:DataActiveCorporations
> {
> DOS_ID:company[0],
> Current_Entity_Name:company[1],
> Initial_DOS_Filing_Date:company[2],
> County:company[3],
> Jurisdiction:company[4],
> Entity_Type:company[5],
>
> DOS_Process_Name:company[6],
> DOS_Process_Address_1:company[7],
> DOS_Process_Address_2:company[8],
> DOS_Process_City:company[9],
> DOS_Process_State:company[10],
> DOS_Process_Zip:company[11],
>
> CEO_Name:company[12],
> CEO_Address_1:company[13],
> CEO_Address_2:company[14],
> CEO_City:company[15],
> CEO_State:company[16],
> CEO_Zip:company[17],
>
> Registered_Agent_Name:company[18],
> Registered_Agent_Address_1:company[19],
> Registered_Agent_Address_2:company[20],
> Registered_Agent_City:company[21],
> Registered_Agent_State:company[22],
> Registered_Agent_Zip:company[23],
>
> Location_Name:company[24],
> Location_Address_1:company[25],
> Location_Address_2:company[26],
> Location_City:company[27],
> Location_State:company[28],
> Location_Zip:company[29]
> }
> );
>
> Each row is one node so it's as close to the raw data as possible. The 
> idea is loosely that these nodes will be linked with new nodes representing 
> people and addresses verified by reporters.
>
> This is what I got:
>
> +-------------------+
> | No data returned. |
> +-------------------+
> Nodes created: 1964486
> Properties set: 58934580
> Labels added: 1964486
> 4550855 ms
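For scale, the timing reported above works out to roughly 76 minutes, i.e. on the order of 430 rows per second:

```python
# Back-of-the-envelope throughput from the figures reported above.
elapsed_ms = 4550855      # total time reported by the shell
rows = 1964486            # nodes created, one per CSV row

minutes = elapsed_ms / 1000 / 60              # ~75.8 minutes
rows_per_second = rows / (elapsed_ms / 1000)  # ~432 rows/s
```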
>
> Some context information: 
> Neo4j Milestone Release 2.1.0-M01
> Windows 7
> java version "1.7.0_03"
>
> Best,
> Aram
>
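The U+F8FF clean-up Aram describes can be scripted; a minimal sketch (the function names and file paths are illustrative, not from the original post):

```python
def strip_private_use(text: str) -> str:
    """Remove U+F8FF, the private-use character that triggered
    '[null] is not a supported property value' during LOAD CSV."""
    return text.replace("\uf8ff", "")

def clean_csv(src_path: str, dst_path: str) -> None:
    # Stream line by line so a ~2M-row file never sits in memory at once.
    with open(src_path, encoding="utf-8") as fin, \
         open(dst_path, "w", encoding="utf-8") as fout:
        for line in fin:
            fout.write(strip_private_use(line))
```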

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Neo4j] Re: Upgrading neo4j 1.9.3 to 2.0.3 fails with Invalid log format version found, expected 3 but was 2

2014-06-17 Thread sunyulovetech
I also encountered this problem. My solution was:
  1. Import the data into a single Neo4j instance (one instance).
  2. Update some data (any change will do).
  3. Start the HA cluster.
  Note:
First start up the Neo4j instance that imported the data.
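In other words, a sketch of the workaround (the paths and the touch-write are illustrative, not from the original post); the key point is that the importing instance must shut down cleanly so its transaction logs are rewritten in the new format:

```
# 1. Start a single (non-HA) instance on the imported store
bin/neo4j start
# 2. Perform any small write so fresh logs are created, then stop cleanly
bin/neo4j-shell -c "CREATE (n:Touch);"
bin/neo4j-shell -c "MATCH (n:Touch) DELETE n;"
bin/neo4j stop
# 3. Start the HA cluster, bringing this instance up first
```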

On Tuesday, 17 June 2014 at 15:59:32 UTC+8, Mamta Thakur wrote:
>
> Hi,
>
> I have been trying to upgrade neo4j from 1.9.3 to 2.0.3, and SDN 
> from 2.3.1.RELEASE to 3.1.0.RELEASE.
>
> I followed the steps listed at 
> http://docs.neo4j.org/chunked/stable/deployment-upgrading.html#explicit-upgrade
> and tried bringing up the server with the upgrade configuration. There are a 
> few new folders created in the db store, one of which is upgrade_backup, 
> and the messages log there says the upgrade happened.
>
> 2014-06-17 07:16:55.286+ INFO  [o.n.k.i.DiagnosticsManager]: --- 
> INITIALIZED diagnostics END ---
> 2014-06-17 07:17:00.216+ INFO  [o.n.k.i.n.s.StoreFactory]: Starting 
> upgrade of database store files
> 2014-06-17 07:17:00.225+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 10% complete
> 2014-06-17 07:17:00.228+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 20% complete
> 2014-06-17 07:17:00.231+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 30% complete
> 2014-06-17 07:17:00.233+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 40% complete
> 2014-06-17 07:17:00.236+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 50% complete
> 2014-06-17 07:17:00.239+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 60% complete
> 2014-06-17 07:17:00.241+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 70% complete
> 2014-06-17 07:17:00.244+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 80% complete
> 2014-06-17 07:17:00.247+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 90% complete
> 2014-06-17 07:17:00.249+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 100% complete
> 2014-06-17 07:17:03.776+ INFO  [o.n.k.i.n.s.StoreFactory]: Finished 
> upgrade of database store files
>
> But I get the error with log/index.
>
> Exception when stopping 
> org.neo4j.index.impl.lucene.LuceneDataSource@42a792f0 
> org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: 
> Invalid log format version found, expected 3 but was 2. To be able to 
> upgrade from an older log format version there must have been a clean 
> shutdown of the database
> java.lang.RuntimeException: 
> org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: 
> Invalid log format version found, expected 3 but was 2. To be able to 
> upgrade from an older log format version there must have been a clean 
> shutdown of the database
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy$1.reached(LogPruneStrategies.java:250)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$AbstractPruneStrategy.prune(LogPruneStrategies.java:78)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy.prune(LogPruneStrategies.java:222)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog.close(XaLogicalLog.java:742)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogBackedXaDataSource.stop(LogBackedXaDataSource.java:69)
> at 
> org.neo4j.index.impl.lucene.LuceneDataSource.stop(LuceneDataSource.java:310)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:547)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport.remove(LifeSupport.java:339)
> at 
> org.neo4j.kernel.impl.transaction.XaDataSourceManager.unregisterDataSource(XaDataSourceManager.java:272)
> at 
> org.neo4j.index.lucene.LuceneKernelExtension.stop(LuceneKernelExtension.java:92)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
> at 
> org.neo4j.kernel.extension.KernelExtensions.stop(KernelExtensions.java:124)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:185)
> at 
> org.neo4j.kernel.InternalAbstractGraphDatabase.shutdown(InternalAbstractGraphDatabase.java:801)
> at 
> org.springframework.data.neo4j.support.DelegatingGraphDatabase.shutdown(DelegatingGraphDatabase.java:270)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at 
> org.springframework.b

Re: [Neo4j] Large scale network analysis - best strategy?

2014-06-17 Thread Nigel Small
Hi Gareth

As you identify, there are certainly some differences in terms of
performance and feature set that you get when working with Neo4j under
different programming languages. Depending on your background, constraints
and integration needs, you could consider a hybrid approach whereby you
continue working with Python for your main application and build anything
that requires serious performance as a server extension in Java. Neo4j
plugin support is pretty comprehensive: for example, my server extension
load2neo  provides a facility to bulk load
data but also has direct support from my Python driver, py2neo
. This approach is somewhat analogous to compiling a C
extension in Python and could be done as an optimisation step once you have
built your end-to-end application logic.

Bear in mind also that Cypher is very powerful these days. It would
certainly be worth exploring some of its more recent capabilities before
choosing an architectural path as you may find there is little that cannot
already be achieved purely with Cypher. If this is the case, your choice of
application language could then become far less critical.

I'd suggest beginning with a prototype in a language you are comfortable
with. Then, build a suite of queries you need to run and ascertain the
bottlenecks or missing features. Once you have a list of these, you can
then make an informed decision on which pieces to optimise.

Kind regards
Nigel
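As one illustration of that Cypher-first approach, the limited-radius analysis described in the original post can be expressed directly; a sketch (the label and relationship names here are hypothetical):

```
// Size of each parent node's neighbourhood up to two steps away
MATCH (a:Parent)-[:LINKED*1..2]-(n:Parent)
WHERE a <> n
RETURN a, count(DISTINCT n) AS reachable_within_two_steps
```

Counting like this is a cheap proxy for the weighted measures described; the real distance-decay weighting would still need per-path logic, which is where a Java extension may eventually pay off.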


On 17 June 2014 15:42, Shongololo  wrote:

> I am preparing a Neo4j database on which I would like to do some network
> analysis. It is a representation of a weakly connected and static physical
> system, and will have in the region of 50 million nodes where, lets say,
> about 50 nodes will connect to a parent node, which in turn is linked
> (think streets and intersections) to a network of other parent nodes.
>
> For most of the analysis, I will be using a weighted distance decay, so
> analysis of things like "betweenness" or "centrality" will be computed for
> the parent node network, but only to a limited extent. So, for example, if
> (a)--(b)--(c)--(d)--(e), then the computation will only be based up to,
> say, two steps away. So (a) will consider (b) and (c), whereas (c) will
> consider two steps in either direction.
>
> My question is a conceptual and strategic one: What is the best approach
> for doing this kind of analysis with neo4j?
>
> I currently work with Python, but it appears that for speed, flexibility,
> and use of complex graph algorithms, I am better off working with the
> embedded Java API for direct and powerful access to the graph? Or is an
> approach using something like bulb flow with gremlin also feasible? How
> does the power and flexibility of the different embedded tools compare -
> e.g. Python embedded vs. Java vs. Node.js?
>
> Thanks.
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Neo4j] Re: Neo4j database ALWAYS shuts down incorrectly if start/stop as a service from a list of windows services

2014-06-17 Thread Jim Salmons
Hi Denys,

I think your experience is a variation of mine, as related here 
. It seems to 
be a long-standing issue that hasn't bitten too many people but is problematic 
for the 2.1 database migration. There are some tips/insights at the link. The 
Neo folks are aware of the issue, so I expect we'll see a fix at some point, 
maybe soon.

Have you tried the "Ole Out and In" -- dumping it from your 'corrupt' DB and 
loading it into an empty 2.1? I had a number of tiny-to-small research and 
self-learning DBs for which it was much easier to go out and back in on a 
fresh 2.1 store.

That tip about timeouts might help you, too. And the Neo4j mojo that I am 
not qualified to comment on, but suspect is a big factor, has to do with 
leveraging the 2.0+ indexing and constraints, etc. There might be some 
tweaks you can make to the schema before migrating which, along with 
the increased timeout value, could give the migration process the room to 
work its one-time-only procedure.

It's a painful problem but, as you can imagine, it hits a relatively small 
segment of the broader Neo4j community. BTW, if you run Neo4j as a Windows 
Service, have you tried my mini Control Panel? 
:-) http://jim-salmons.github.io/neo4jcp/

I've subscribed to this thread and will let you know if I learn anything 
more, etc.

--Jim--

www.FactMiners.org and www.SoftalkApple.com

On Tuesday, June 17, 2014 4:50:09 AM UTC-5, Denys Hryvastov wrote:
>
> Here is a stack trace that I get when I try to do upgrade from 1.9.5 to 
> 2.0:
>
> 2014-06-17 09:48:27.319+ INFO  [API] Setting startup timeout to: 
> 12ms based on -1
> Detected incorrectly shut down database, performing recovery..
> 2014-06-17 09:48:28.108+ DEBUG [API]
> org.neo4j.server.ServerStartupException: Starting Neo4j Server failed: 
> Error starting org.neo4j.kernel.EmbeddedGraphDatabase, 
> D:\Neo4j\neo4j-enterprise-2.0.0\data\graph.db
> at 
> org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:209) 
> ~[neo4j-server-2.0.0.jar:2.0.0]
> at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:87) 
> [neo4j-server-2.0.0.jar:2.0.0]
> at org.neo4j.server.Bootstrapper.main(Bootstrapper.java:50) 
> [neo4j-server-2.0.0.jar:2.0.0]
> Caused by: java.lang.RuntimeException: Error starting 
> org.neo4j.kernel.EmbeddedGraphDatabase, 
> D:\Neo4j\neo4j-enterprise-2.0.0\data\graph.db
> at 
> org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:333)
>  
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> at 
> org.neo4j.kernel.EmbeddedGraphDatabase.(EmbeddedGraphDatabase.java:63) 
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> at 
> org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:92)
>  
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> at 
> org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:198)
>  
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> at 
> org.neo4j.kernel.impl.recovery.StoreRecoverer.recover(StoreRecoverer.java:115)
>  
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> at 
> org.neo4j.server.preflight.PerformRecoveryIfNecessary.run(PerformRecoveryIfNecessary.java:59)
>  
> ~[neo4j-server-2.0.0.jar:2.0.0]
> at 
> org.neo4j.server.preflight.PreFlightTasks.run(PreFlightTasks.java:70) 
> ~[neo4j-server-2.0.0.jar:2.0.0]
> at 
> org.neo4j.server.AbstractNeoServer.runPreflightTasks(AbstractNeoServer.java:319)
>  
> ~[neo4j-server-2.0.0.jar:2.0.0]
> at 
> org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:144) 
> ~[neo4j-server-2.0.0.jar:2.0.0]
> ... 2 common frames omitted
> Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 
> 'org.neo4j.kernel.impl.transaction.XaDataSourceManager@2b1eb67d' was 
> successfully initialized, but failed to start.
> Please see attached cause exception.
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:504)
>  
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> at 
> org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:115) 
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> at 
> org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:310)
>  
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> ... 10 common frames omitted
> Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 
> 'org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource@1bf5df6a' was 
> successfully initialized, but failed to start. P
> lease see attached cause exception.
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:504)
>  
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> at 
> org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:115) 
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
> at 
> org.neo4j.kernel.impl.transaction.XaDataSourceManager.start(XaDataSourceManager.java:164)
>  
> ~[neo4j-kernel-2.0.0.jar:2.0.0]
>   

Re: [Neo4j] LOAD CSV creates nodes but does not set properties

2014-06-17 Thread Michael Hunger
Then something is really wrong.

What happens if you do

LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
WITH c LIMIT 100
MATCH (client: Client { Id: toInt(c.Id)}), (city: City { Id: toInt(c.CityId)})
RETURN count(*)

I'm at a conference in Amsterdam this week
but perhaps we can do a skype call next week?

Michael



Sent from mobile device

On 17.06.2014 at 18:48, Paul Damian wrote:

> Yes, I do. I keep getting a Java heap space error now. I'm using a commit 
> size of 100.
> 
> Tuesday, 17 June 2014, 19:28:05 UTC+3, Michael Hunger wrote:
>> 
>> Ok, cool and you have the indexes for both :City(Id) and :Client(Id) ?
>> 
>> 
>> Michael
>> 
>> On 17.06.2014 at 18:15, Paul Damian wrote:
>> 
>>> The first query returns 96 which is the number of rows in the file and 
>>> the second one returns Neo.DatabaseError.Statement.ExecutionFailure
>>>  probably because of the null values. But then I run the following command:
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>  MATCH (city:City { Id: toInt(c.CityId)})
>>> WHERE coalesce(c.CityId,"") <> ""
>>> RETURN count(*)
>>> 
>>> and I get 992980
>>> 
>>> 
>> Tuesday, 17 June 2014, 17:55:56 UTC+3, Michael Hunger wrote:
>>>> No, you can just filter out the lines with no cityid.
>>>> 
>>>> Did you run my suggested commands?
>>>> 
>>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>> MATCH (client: Client { Id: toInt(c.Id)})
>>>> RETURN count(*)
>>>> 
>>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>> MATCH (city: City { Id: toInt(c.CityId)})
>>>> RETURN count(*)
>>>> 
>>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>> return c
>>>> limit 10
 
 
On 17.06.2014 at 16:37, Paul Damian wrote:
 
> in the file I only have 2 columns: one for client id, which is never 
> null, and CityId, which may sometimes be null. Should I export the records 
> from the SQL database leaving out the columns that contain null values?
> 
> Tuesday, 17 June 2014, 15:39:14 UTC+3, Michael Hunger wrote:
>> 
>> if they don't have a value for city id, do they then have empty columns 
>> there still? like "user-id,,
>> 
>> You probably want to filter these rows?
>> 
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>> WHERE coalesce(c.CitiId,"") <> ""
>> ...
>> 
>> On 17.06.2014 at 11:23, Paul Damian wrote:
>> 
>>> Well, the csv file contains some rows that do not have a value for 
>>> CityId, and the rows are unique with regard to the clientID. There are 11M 
>>> clients living in 14K Cities. Is there a limit on links per node?
>>> Now I've created a piece of code that reads from the file and creates each 
>>> relationship but, as you can imagine, it works really slowly in this 
>>> scenario.
>>>  
>>>> did you create an index on :Client(Id) and :City(Id)?
>>>> 
>>>> what happens if you do:
>>>> 
>>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>> MATCH (client: Client { Id: toInt(c.Id)})
>>>> RETURN count(*)
>>>> 
>>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>> MATCH (city: City { Id: toInt(c.CityId)})
>>>> RETURN count(*)
>>>> 
>>>> each count should be equivalent to the # of rows in the file.
>>>> 
>>>> Michael
>>>> On 16.06.2014 at 17:47, Paul Damian wrote:
 
> Somehow I've managed to load all the nodes and now I'm trying to load 
> the links as well. I read the nodes from the csv file and create the 
> relationships between them. I run the following command:
> USING PERIODIC COMMIT 100 
>  LOAD CSV WITH HEADERS FROM 
> "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>  MATCH (client: Client { Id: toInt(c.Id)}), (city: City { Id: 
> toInt(c.CityId)})
>  CREATE (client)-[r:LOCATED_IN]->(city)
> 
> Running with a smaller commit size returns this error 
> Neo.DatabaseError.Statement.ExecutionFailure, while increasing the 
> commit size to 1 throws Neo.DatabaseError.General.UnknownFailure. 
> Can you help me with this?
> 
> 
> Thursday, 5 June 2014, 12:05:18 UTC+3, Michael Hunger wrote:
>> 
>> Perhaps something with field or line terminators?
>> 
>> I assume it blows up the field separation.
>> 
>> Try to run:
>> 
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" 
>> AS c
>> RETURN { Id: toInt(c.Id), FirstName: c.FirstName, LastName: 
>> c.Las
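Putting this thread together: in Cypher a row filter straight after LOAD CSV needs an intervening WITH, so a hedged sketch of the full relationship import with the empty-CityId rows filtered out (assuming the :Client(Id) and :City(Id) indexes discussed above exist) might read:

```
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
WITH c
WHERE coalesce(c.CityId, "") <> ""
MATCH (client:Client { Id: toInt(c.Id) }), (city:City { Id: toInt(c.CityId) })
CREATE (client)-[:LOCATED_IN]->(city)
```

This is a sketch, not the poster's final query; the filter skips rows where the MATCH on :City would otherwise fail on a null id.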

Re: [Neo4j] LOAD CSV creates nodes but does not set properties

2014-06-17 Thread Paul Damian
Yes, I do. I keep getting a Java heap space error now. I'm using a commit 
size of 100.

Tuesday, 17 June 2014, 19:28:05 UTC+3, Michael Hunger wrote:
>
> Ok, cool and you have the indexes for both :City(Id) and :Client(Id) ?
>
>
> Michael
>
> On 17.06.2014 at 18:15, Paul Damian wrote:
>
> The first query returns 96 which is the number of rows in the file and 
> the second one returns Neo.DatabaseError.Statement.ExecutionFailure
>  probably because of the null values. But then I run the following command:
> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS 
> c
>  MATCH (city:City { Id: toInt(c.CityId)})
> WHERE coalesce(c.CityId,"") <> ""
> RETURN count(*)
>
> and I get 992980
>
>
> Tuesday, 17 June 2014, 17:55:56 UTC+3, Michael Hunger wrote:
>
>> No you can just filter out the lines with no cityid
>>
>> Did you run my suggested commands?
>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>> MATCH (client: Client { Id: toInt(c.Id)})
>>> RETURN count(*)
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>> MATCH (city: City { Id: toInt(c.CityId)})
>>> RETURN count(*)
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>> return c
>>> limit 10
>>
>> On 17.06.2014 at 16:37, Paul Damian wrote:
>>
>> in the file I only have 2 columns, one for client id, which is always not 
>> null and CityId, which may be sometimes null. Should I export the records 
>> from SQL database leaving out the columns that contain null values?
>>
>> Tuesday, 17 June 2014, 15:39:14 UTC+3, Michael Hunger wrote:
>>>
>>> if they don't have a value for city id, do they then have empty columns 
>>> there still? like "user-id,,
>>>
>>> You probably want to filter these rows?
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>> WHERE coalesce(c.CitiId,"") <> ""
>>> ...
>>>
>>> On 17.06.2014 at 11:23, Paul Damian wrote:
>>>
>>> Well, the csv file contains some rows that do not have a value for 
>>> CityId, and the rows are unique regarding the clientID. There are 11M 
>>> clients living in 14K Cities. Is there a limit of links/node?
>>> Now I've created a piece of code that reads from file and creates each 
>>> relationship, but, as you can imagine, it works really slow in this 
>>> scenario.
>>>  
>>>
>>>> did you create an index on :Client(Id) and :City(Id)?
>>>>
>>>> what happens if you do:
>>>>
>>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>> MATCH (client: Client { Id: toInt(c.Id)})
>>>> RETURN count(*)
>>>>
>>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>> MATCH (city: City { Id: toInt(c.CityId)})
>>>> RETURN count(*)
>>>>
>>>> each count should be equivalent to the # of rows in the file.
>>>>
>>>> Michael
>>>>
>>>> On 16.06.2014 at 17:47, Paul Damian wrote:

>>>>> Somehow I've managed to load all the nodes and now I'm trying to load 
>>>>> the links as well. I read the nodes from csv file and create the 
>>>>> relation between them. I run the following command:
>>>>> USING PERIODIC COMMIT 100 
>>>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>>> MATCH (client: Client { Id: toInt(c.Id)}), (city: City { Id: toInt(c.CityId)})
>>>>> CREATE (client)-[r:LOCATED_IN]->(city)
>>>>>
>>>>> Running with a smaller commit size returns this error 
>>>>> Neo.DatabaseError.Statement.ExecutionFailure, while increasing the 
>>>>> commit size to 1 throws Neo.DatabaseError.General.UnknownFailure. 
>>>>> Can you help me with this?


> Thursday, 5 June 2014, 12:05:18 UTC+3, Michael Hunger wrote:
>
> Perhaps something with field or line terminators?
>
> I assume it blows up the field separation.
>
> Try to run:
>
> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" 
> AS c
> RETURN { Id: toInt(c.Id), FirstName: c.FirstName, LastName: 
> c.Lastname, Address: c.Address, ZipCode: toInt(c.ZipCode), Email: 
> c.Email, 
> Phone: c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, 
> Latitude: toFloat(c.Latitude), Longitude: toFloat(c.Longitude), AgencyId: 
> toInt(c.AgencyId), RowStatus: toInt(c.RowStatus)} as data, c as line
> LIMIT 3
>
>
>
> On Thu, Jun 5, 2014 at 10:51 AM, Paul Damian  
> wrote:
>
>> I've tried using the shell and I get the same results: nodes with no 
>> properties.
>> I've created the csv file using MsSQL Server Export. Is it relevant?
>>
>> About you curiosity: I figured I would import first the nodes, then 
>> the relationships from the connection tables. Am I doing it wrong?
>>
>> Thanks
>>
> Thursday, 5 June 2014, 09:54:31 UTC+3, Michael Hunger wrote:
>>>
>>> I'd probably use a commit size in 

Re: [Neo4j] LOAD CSV creates nodes but does not set properties

2014-06-17 Thread Michael Hunger
Ok, cool and you have the indexes for both :City(Id) and :Client(Id) ?


Michael

On 17.06.2014 at 18:15, Paul Damian wrote:

> The first query returns 96 which is the number of rows in the file and 
> the second one returns Neo.DatabaseError.Statement.ExecutionFailure
>  probably because of the null values. But then I run the following command:
> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>  MATCH (city:City { Id: toInt(c.CityId)})
> WHERE coalesce(c.CityId,"") <> ""
> RETURN count(*)
> 
> and I get 992980
> 
> 
> Tuesday, 17 June 2014, 17:55:56 UTC+3, Michael Hunger wrote:
> No you can just filter out the lines with no cityid
> 
> Did you run my suggested commands?
> 
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>> MATCH (client: Client { Id: toInt(c.Id)})
>>> RETURN count(*)
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>> MATCH (city: City { Id: toInt(c.CityId)})
>>> RETURN count(*)
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>> return c
>>> limit 10
> 
> On 17.06.2014 at 16:37, Paul Damian wrote:
> 
>> in the file I only have 2 columns, one for client id, which is always not 
>> null and CityId, which may be sometimes null. Should I export the records 
>> from SQL database leaving out the columns that contain null values?
>> 
>> Tuesday, 17 June 2014, 15:39:14 UTC+3, Michael Hunger wrote:
>> if they don't have a value for city id, do they then have empty columns 
>> there still? like "user-id,,
>> 
>> You probably want to filter these rows?
>> 
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>> WHERE coalesce(c.CitiId,"") <> ""
>> ...
>> 
>> On 17.06.2014 at 11:23, Paul Damian wrote:
>> 
>>> Well, the csv file contains some rows that do not have a value for CityId, 
>>> and the rows are unique regarding the clientID. There are 11M clients 
>>> living in 14K Cities. Is there a limit of links/node?
>>> Now I've created a piece of code that reads from file and creates each 
>>> relationship, but, as you can imagine, it works really slow in this 
>>> scenario.
>>>  
>>> did you create an index on :Client(Id) and :City(Id)
>>> 
>>> what happens if you do:
>>> 
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>> MATCH (client: Client { Id: toInt(c.Id)})
>>> RETURN count(*)
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>> MATCH (city: City { Id: toInt(c.CityId)})
>>> RETURN count(*)
>>> 
>>> each count should be equivalent to the # of rows in the file.
>>> 
>>> Michael
>>> 
>>> On 16.06.2014 at 17:47, Paul Damian wrote:
>>> 
>>>> Somehow I've managed to load all the nodes and now I'm trying to load the 
>>>> links as well. I read the nodes from csv file and create the relation 
>>>> between them. I run the following command:
>>>> USING PERIODIC COMMIT 100 
>>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>> MATCH (client: Client { Id: toInt(c.Id)}), (city: City { Id: toInt(c.CityId)})
>>>> CREATE (client)-[r:LOCATED_IN]->(city)
>>>>
>>>> Running with a smaller commit size returns this error 
>>>> Neo.DatabaseError.Statement.ExecutionFailure, while increasing the commit 
>>>> size to 1 throws Neo.DatabaseError.General.UnknownFailure. 
>>>> Can you help me with this?
 
 
Thursday, 5 June 2014, 12:05:18 UTC+3, Michael Hunger wrote:
 Perhaps something with field or line terminators?
 
 I assume it blows up the field separation.
 
 Try to run:
 
 LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" AS c
 RETURN { Id: toInt(c.Id), FirstName: c.FirstName, LastName: c.Lastname, 
 Address: c.Address, ZipCode: toInt(c.ZipCode), Email: c.Email, Phone: 
 c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, Latitude: 
 toFloat(c.Latitude), Longitude: toFloat(c.Longitude), AgencyId: 
 toInt(c.AgencyId), RowStatus: toInt(c.RowStatus)} as data, c as line
 LIMIT 3
 
 
 
 On Thu, Jun 5, 2014 at 10:51 AM, Paul Damian  wrote:
 I've tried using the shell and I get the same results: nodes with no 
 properties.
 I've created the csv file using MsSQL Server Export. Is it relevant?
 
 About you curiosity: I figured I would import first the nodes, then the 
 relationships from the connection tables. Am I doing it wrong?
 
 Thanks
 
Thursday, 5 June 2014, 09:54:31 UTC+3, Michael Hunger wrote:
 I'd probably use a commit size in your case of 50k or 100k.
 
 Try to use the neo4j-shell and not the web-interface.
 
 Connect to neo4j using bin/neo4j-shell
 
 Then run your commands ending with a semicolon.
 
 Just curious: Your data is imported as one node per row? That's not really 

[Neo4j] Large scale network analysis - best strategy?

2014-06-17 Thread Shongololo
I am preparing a Neo4j database on which I would like to do some network 
analysis. It is a representation of a weakly connected and static physical 
system, and will have in the region of 50 million nodes where, let's say, 
about 50 nodes will connect to a parent node, which in turn is linked 
(think streets and intersections) to a network of other parent nodes.

For most of the analysis, I will be using a weighted distance decay, so 
analysis of things like "betweenness" or "centrality" will be computed for 
the parent node network, but only to a limited extent. So, for example, if 
(a)--(b)--(c)--(d)--(e), then the computation will only be based up to, 
say, two steps away. So (a) will consider (b) and (c), whereas (c) will 
consider two steps in either direction.
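
The distance-limited computation described above can be prototyped in plain Python before committing to any particular API. A rough stdlib sketch (assumptions: an adjacency-dict graph and a simple "nodes reachable within k hops" count standing in for the real weighted betweenness/centrality):

```python
# Sketch: count nodes reachable within a fixed number of hops, as a crude
# stand-in for distance-limited centrality on a street-like network.
from collections import deque

def reach_within(adj, source, max_steps=2):
    """Breadth-first search from `source`, truncated at `max_steps` hops."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_steps:
            continue                      # do not expand beyond the cutoff
        for neighbour in adj[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return len(seen) - 1                  # exclude the source itself

# The path (a)--(b)--(c)--(d)--(e) from the post: 'a' sees b and c,
# while 'c' sees two steps in either direction.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
print(reach_within(adj, "a"))  # 2
print(reach_within(adj, "c"))  # 4
```

At 50 million nodes the same truncated-traversal idea would need the embedded Java traversal framework or a batch job rather than per-node Cypher queries, but the cutoff logic is identical.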

My question is a conceptual and strategic one: What is the best approach 
for doing this kind of analysis with neo4j?

I currently work with Python, but it appears that for speed, flexibility, 
and use of complex graph algorithms I would be better off working with the 
embedded Java API, which gives direct and powerful access to the graph. Or 
is an approach using something like Bulbflow with Gremlin also feasible? 
How does the power and flexibility of the different embedded tools 
compare, e.g. embedded Python vs. Java vs. Node.js?

Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neo4j+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Neo4j] LOAD CSV creates nodes but does not set properties

2014-06-17 Thread Paul Damian
The first query returns 96, which is the number of rows in the file, and 
the second one returns Neo.DatabaseError.Statement.ExecutionFailure, 
probably because of the null values. But then I run the following command:
LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
 MATCH (city:City { Id: toInt(c.CityId)})
WHERE coalesce(c.CityId,"") <> ""
RETURN count(*)

and I get 992980
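
An alternative to filtering inside Cypher is to drop the empty-CityId rows before LOAD CSV ever sees them. A sketch assuming an `Id,CityId` header (an assumption about the export, not the poster's actual LOCATED_IN.csv):

```python
# Pre-filter sketch: copy only rows whose CityId is non-empty, so the later
# LOAD CSV + MATCH never encounters null city ids.
import csv
import io

src = io.StringIO("Id,CityId\n1,14\n2,\n3,7\n")   # stand-in for the real file
dst = io.StringIO()

reader = csv.DictReader(src)
writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
writer.writeheader()

kept = 0
for row in reader:
    # same test as Cypher's coalesce(c.CityId, '') <> ''
    if (row["CityId"] or "").strip():
        writer.writerow(row)
        kept += 1

print(kept)  # 2 of the 3 sample rows survive
```

The same loop pointed at real files would shrink the import to exactly the rows that can MATCH a City node.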


On Tuesday, 17 June 2014 at 17:55:56 UTC+3, Michael Hunger wrote:

> No you can just filter out the lines with no cityid
>
> Did you run my suggested commands?
>
> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS 
>>> c
>>>  MATCH (client: Client { Id: toInt(c.Id)})
>>>
>>> RETURN count(*)
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
>>> AS c
>>>  MATCH (city: City { Id: toInt(c.CityId)})
>>>
>>> RETURN count(*)
>>>
>>
>>>
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
>>> AS c
>>>
>>> return c
> limit 10
>
>
>>> On 17.06.2014 at 16:37, Paul Damian wrote:
>
> In the file I only have two columns: one for the client id, which is never 
> null, and CityId, which may sometimes be null. Should I export the records 
> from the SQL database leaving out the columns that contain null values?
>
> On Tuesday, 17 June 2014 at 15:39:14 UTC+3, Michael Hunger wrote:
>>
>> if they don't have a value for city id, do they then have empty columns 
>> there still? like "user-id,,
>>
>> You probably want to filter these rows?
>>
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
>>> AS c
>>>
>>> WHERE coalesce(c.CityId,"") <> ""
>> ...
>>
>> On 17.06.2014 at 11:23, Paul Damian wrote:
>>
>> Well, the csv file contains some rows that do not have a value for 
>> CityId, and the rows are unique regarding the clientID. There are 11M 
>> clients living in 14K Cities. Is there a limit of links/node?
>> Now I've created a piece of code that reads from file and creates each 
>> relationship, but, as you can imagine, it works really slow in this 
>> scenario.
>>  
>>
>>> did you create an index on :Client(Id) and :City(Id)
>>>
>>> what happens if you do:
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
>>> AS c
>>>  MATCH (client: Client { Id: toInt(c.Id)})
>>>
>>> RETURN count(*)
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
>>> AS c
>>>  MATCH (city: City { Id: toInt(c.CityId)})
>>>
>>> RETURN count(*)
>>>
>>> each count should be equivalent to the # of rows in the file.
>>>
>>> Michael
>>>
>>> On 16.06.2014 at 17:47, Paul Damian wrote:
>>>
>>> Somehow I've managed to load all the nodes and now I'm trying to load 
>>> the links as well. I read the nodes from csv file and create the relation 
>>> between them. I run the following command:
>>> USING PERIODIC COMMIT 100 
>>>  LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
>>> AS c
>>>  MATCH (client: Client { Id: toInt(c.Id)}), (city: City { Id: 
>>> toInt(c.CityId)})
>>>  CREATE (client)-[r:LOCATED_IN]->(city)
>>>
>>> Running with a smaller commit size returns this error 
>>> Neo.DatabaseError.Statement.ExecutionFailure, while increasing the 
>>> commit size to 1 throws Neo.DatabaseError.General.UnknownFailure. 
>>> Can you help me with this?
>>>
>>>
>>> On Thursday, 5 June 2014 at 12:05:18 UTC+3, Michael Hunger wrote:

 Perhaps something with field or line terminators?

 I assume it blows up the field separation.

 Try to run:

 LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" AS 
 c
 RETURN { Id: toInt(c.Id), FirstName: c.FirstName, LastName: c.Lastname, 
 Address: c.Address, ZipCode: toInt(c.ZipCode), Email: c.Email, Phone: 
 c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, Latitude: 
 toFloat(c.Latitude), Longitude: toFloat(c.Longitude), AgencyId: 
 toInt(c.AgencyId), RowStatus: toInt(c.RowStatus)} as data, c as line
 LIMIT 3



 On Thu, Jun 5, 2014 at 10:51 AM, Paul Damian  
 wrote:

> I've tried using the shell and I get the same results: nodes with no 
> properties.
> I've created the csv file using MsSQL Server Export. Is it relevant?
>
> About your curiosity: I figured I would import the nodes first, then 
> the relationships from the connection tables. Am I doing it wrong?
>
> Thanks
>
> On Thursday, 5 June 2014 at 09:54:31 UTC+3, Michael Hunger wrote:
>>
>> I'd probably use a commit size in your case of 50k or 100k.
>>
>> Try to use the neo4j-shell and not the web-interface.
>>
>> Connect to neo4j using bin/neo4j-shell
>>
>> Then run your commands ending with a semicolon.
>>
>> Just curious: Your data is imported as one node per row? That's not 
>> really a graph structure.
>>
>>
>>
>>
>> On Wed, Jun 4, 2014 at 6:56 PM, Paul Damian  
>> wrote:
>>
>>> Hi there,

Re: [Neo4j] Upgrading neo4j 1.9.3 to 2.0.3 fails with Invalid log format version found, expected 3 but was 2

2014-06-17 Thread Michael Hunger
No, as you have a clean shutdown all the data is in the store.

Sent from mobile device

On 17.06.2014 at 16:55, Mamta Thakur wrote:

> Hi Michael,
> 
> Does that mean I will lose the existing index as well? How would any of my 
> queries work?
> 
> I fixed some of the @RelationshipEntity that did not have an @GraphId 
> required with 2.0.3.
> Now I don't get this error on startup; rather, I get it when shutting 
> down the server.
> 
> ~Mamta.
> 
> On Tuesday, June 17, 2014 7:36:10 PM UTC+5:30, Michael Hunger wrote:
>> 
>> Btw. just got the info that it is fixed and will be part of 2.0.4
>> 
>> https://github.com/neo4j/neo4j/commit/37371aa (Thanks Jake!)
>> 
>> Michael
>> On 17.06.2014 at 14:55, Michael Hunger wrote:
>> 
>>> This is a known issue which is currently being worked on,
>>> 
>>> can you delete the logical log files of the lucene index after your upgrade?
>>> 
>>> that means
>>> 
>>> rm graph.db/index/lucene.log.*
>>> 
>>> and you _might_ need to create a transaction against an index, like creating
>>> an index and deleting it again, e.g. from java code or the shell.
>>> 
>>> db.index().forNodes("foo").delete()
>>> 
>>> 
>>> 
>>> Thanks a lot
>>> 
>>> Michael
>>> 
>>> On 17.06.2014 at 09:59, Mamta Thakur wrote:
>>> 
 Hi,
 
 I have been trying to upgrade neo4j from 1.9.3 to 2.0.3. SDN from 
 2.3.1.RELEASE to 3.1.0.RELEASE.
 
 Followed the steps listed @ 
 http://docs.neo4j.org/chunked/stable/deployment-upgrading.html#explicit-upgrade
 I try bringing up the server with the upgrade configuration. There are a 
 few new folders created in the db store, one of which is upgrade_backup, 
 and the messages log there says the upgrade happened.
 
 2014-06-17 07:16:55.286+ INFO  [o.n.k.i.DiagnosticsManager]: --- 
 INITIALIZED diagnostics END ---
 2014-06-17 07:17:00.216+ INFO  [o.n.k.i.n.s.StoreFactory]: Starting 
 upgrade of database store files
 2014-06-17 07:17:00.225+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 10% complete
 2014-06-17 07:17:00.228+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 20% complete
 2014-06-17 07:17:00.231+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 30% complete
 2014-06-17 07:17:00.233+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 40% complete
 2014-06-17 07:17:00.236+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 50% complete
 2014-06-17 07:17:00.239+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 60% complete
 2014-06-17 07:17:00.241+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 70% complete
 2014-06-17 07:17:00.244+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 80% complete
 2014-06-17 07:17:00.247+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 90% complete
 2014-06-17 07:17:00.249+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
 upgrade 100% complete
 2014-06-17 07:17:03.776+ INFO  [o.n.k.i.n.s.StoreFactory]: Finished 
 upgrade of database store files
 
 But I get the error with log/index.
 
 Exception when stopping 
 org.neo4j.index.impl.lucene.LuceneDataSource@42a792f0 
 org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: Invalid log format version 
 found, expected 3 but was 2. To be able to upgrade from an older log 
 format version there must have been a clean shutdown of the database
 java.lang.RuntimeException: 
 org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: 
 Invalid log format version found, expected 3 but was 2. To be able to 
 upgrade from an older log format version there must 
 have been a clean shutdown of the database
 at 
 org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy$1.reached(LogPruneStrategies.java:250)
 at 
 org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$AbstractPruneStrategy.prune(LogPruneStrategies.java:78)
 at 
 org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy.prune(LogPruneStrategies.java:222)
 at 
 org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog.close(XaLogicalLog.java:742)
 at 
 org.neo4j.kernel.impl.transaction.xaframework.LogBackedXaDataSource.stop(LogBackedXaDataSource.java:69)
 at 
 org.neo4j.index.impl.lucene.LuceneDataSource.stop(LuceneDataSource.java:310)
 at 
 org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
 at 
 org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:547)
 at 
 org.neo4j.kernel.lifecycle.LifeSupport.remove(LifeSupport.java:339)
 at 
 org.neo4j.kernel.impl.transaction.XaDataSourceManager.unregisterDataSource(XaDataSourceManager.java:272)
>>

Re: [Neo4j] Upgrading neo4j 1.9.3 to 2.0.3 fails with Invalid log format version found, expected 3 but was 2

2014-06-17 Thread Mamta Thakur
Hi Michael,

Does that mean I will lose the existing index as well? How would any of my 
queries work?

I fixed some of the @RelationshipEntity that did not have an @GraphId 
required with 2.0.3.
Now I don't get this error on startup; rather, I get it when 
shutting down the server.

~Mamta.

On Tuesday, June 17, 2014 7:36:10 PM UTC+5:30, Michael Hunger wrote:
>
> Btw. just got the info that it is fixed and will be part of 2.0.4
>
> https://github.com/neo4j/neo4j/commit/37371aa (Thanks Jake!)
>
> Michael
> On 17.06.2014 at 14:55, Michael Hunger <
> michael...@neotechnology.com > wrote:
>
> This is a known issue which is currently being worked on,
>
> can you delete the logical log files of the lucene index after your 
> upgrade?
>
> that means
>
> rm graph.db/index/lucene.log.*
>
> and you _might_ need to create a transaction against an index, like 
> creating
> an index and deleting it again, e.g. from java code or the shell.
>
> db.index().forNodes("foo").delete()
>
>
>
> Thanks a lot
>
> Michael
>
> On 17.06.2014 at 09:59, Mamta Thakur wrote:
>
> Hi,
>
> I have been trying to upgrade neo4j from 1.9.3 to 2.0.3. SDN 
> from 2.3.1.RELEASE to 3.1.0.RELEASE.
>
> Followed the steps listed @ 
> http://docs.neo4j.org/chunked/stable/deployment-upgrading.html#explicit-upgrade
> I try bringing up the server with the upgrade configuration. There are a 
> few new folders created in the db store, one of which is upgrade_backup, 
> and the messages log there says the upgrade happened.
>
> 2014-06-17 07:16:55.286+ INFO  [o.n.k.i.DiagnosticsManager]: --- 
> INITIALIZED diagnostics END ---
> 2014-06-17 07:17:00.216+ INFO  [o.n.k.i.n.s.StoreFactory]: Starting 
> upgrade of database store files
> 2014-06-17 07:17:00.225+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 10% complete
> 2014-06-17 07:17:00.228+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 20% complete
> 2014-06-17 07:17:00.231+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 30% complete
> 2014-06-17 07:17:00.233+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 40% complete
> 2014-06-17 07:17:00.236+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 50% complete
> 2014-06-17 07:17:00.239+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 60% complete
> 2014-06-17 07:17:00.241+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 70% complete
> 2014-06-17 07:17:00.244+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 80% complete
> 2014-06-17 07:17:00.247+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 90% complete
> 2014-06-17 07:17:00.249+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
> upgrade 100% complete
> 2014-06-17 07:17:03.776+ INFO  [o.n.k.i.n.s.StoreFactory]: Finished 
> upgrade of database store files
>
> But I get the error with log/index.
>
> Exception when stopping 
> org.neo4j.index.impl.lucene.LuceneDataSource@42a792f0 
> org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: Invalid log format version 
> found, expected 3 but was 2. To be able to upgrade from an older log format 
> version there must have been a clean shutdown of the database
> java.lang.RuntimeException: 
> org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: 
> Invalid log format version found, expected 3 but was 2. To be able to 
> upgrade from an older log format version there must 
> have been a clean shutdown of the database
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy$1.reached(LogPruneStrategies.java:250)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$AbstractPruneStrategy.prune(LogPruneStrategies.java:78)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy.prune(LogPruneStrategies.java:222)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog.close(XaLogicalLog.java:742)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogBackedXaDataSource.stop(LogBackedXaDataSource.java:69)
> at 
> org.neo4j.index.impl.lucene.LuceneDataSource.stop(LuceneDataSource.java:310)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:547)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport.remove(LifeSupport.java:339)
> at 
> org.neo4j.kernel.impl.transaction.XaDataSourceManager.unregisterDataSource(XaDataSourceManager.java:272)
> at 
> org.neo4j.index.lucene.LuceneKernelExtension.stop(LuceneKernelExtension.java:92)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
> at 
> org.neo4j.kernel.extension.KernelExtensions.stop(KernelExtensions.java:124)
> at 
> org.neo4j.kernel.l

Re: [Neo4j] LOAD CSV creates nodes but does not set properties

2014-06-17 Thread Michael Hunger
No you can just filter out the lines with no cityid

Did you run my suggested commands?

>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>  MATCH (client: Client { Id: toInt(c.Id)})
>> RETURN count(*)
>> 
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>  MATCH (city: City { Id: toInt(c.CityId)})
>> RETURN count(*)

> 

>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
return c
limit 10

>> 

On 17.06.2014 at 16:37, Paul Damian wrote:

> In the file I only have two columns: one for the client id, which is never 
> null, and CityId, which may sometimes be null. Should I export the records 
> from the SQL database leaving out the columns that contain null values?
> 
> On Tuesday, 17 June 2014 at 15:39:14 UTC+3, Michael Hunger wrote:
> if they don't have a value for city id, do they then have empty columns there 
> still? like "user-id,,
> 
> You probably want to filter these rows?
> 
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
> WHERE coalesce(c.CityId,"") <> ""
> ...
> 
> On 17.06.2014 at 11:23, Paul Damian wrote:
> 
>> Well, the csv file contains some rows that do not have a value for CityId, 
>> and the rows are unique regarding the clientID. There are 11M clients living 
>> in 14K Cities. Is there a limit of links/node?
>> Now I've created a piece of code that reads from file and creates each 
>> relationship, but, as you can imagine, it works really slow in this scenario.
>>  
>> did you create an index on :Client(Id) and :City(Id)
>> 
>> what happens if you do:
>> 
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>  MATCH (client: Client { Id: toInt(c.Id)})
>> RETURN count(*)
>> 
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>>  MATCH (city: City { Id: toInt(c.CityId)})
>> RETURN count(*)
>> 
>> each count should be equivalent to the # of rows in the file.
>> 
>> Michael
>> 
>> On 16.06.2014 at 17:47, Paul Damian wrote:
>> 
>>> Somehow I've managed to load all the nodes and now I'm trying to load the 
>>> links as well. I read the nodes from csv file and create the relation 
>>> between them. I run the following command:
>>> USING PERIODIC COMMIT 100 
>>>  LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS 
>>> c
>>>  MATCH (client: Client { Id: toInt(c.Id)}), (city: City { Id: 
>>> toInt(c.CityId)})
>>>  CREATE (client)-[r:LOCATED_IN]->(city)
>>> 
>>> Running with a smaller commit size returns this error 
>>> Neo.DatabaseError.Statement.ExecutionFailure, while increasing the commit 
>>> size to 1 throws Neo.DatabaseError.General.UnknownFailure. 
>>> Can you help me with this?
>>> 
>>> 
>>> On Thursday, 5 June 2014 at 12:05:18 UTC+3, Michael Hunger wrote:
>>> Perhaps something with field or line terminators?
>>> 
>>> I assume it blows up the field separation.
>>> 
>>> Try to run:
>>> 
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" AS c
>>> RETURN { Id: toInt(c.Id), FirstName: c.FirstName, LastName: c.Lastname, 
>>> Address: c.Address, ZipCode: toInt(c.ZipCode), Email: c.Email, Phone: 
>>> c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, Latitude: 
>>> toFloat(c.Latitude), Longitude: toFloat(c.Longitude), AgencyId: 
>>> toInt(c.AgencyId), RowStatus: toInt(c.RowStatus)} as data, c as line
>>> LIMIT 3
>>> 
>>> 
>>> 
>>> On Thu, Jun 5, 2014 at 10:51 AM, Paul Damian  wrote:
>>> I've tried using the shell and I get the same results: nodes with no 
>>> properties.
>>> I've created the csv file using MsSQL Server Export. Is it relevant?
>>> 
>>> About your curiosity: I figured I would import the nodes first, then the 
>>> relationships from the connection tables. Am I doing it wrong?
>>> 
>>> Thanks
>>> 
>>> On Thursday, 5 June 2014 at 09:54:31 UTC+3, Michael Hunger wrote:
>>> I'd probably use a commit size in your case of 50k or 100k.
>>> 
>>> Try to use the neo4j-shell and not the web-interface.
>>> 
>>> Connect to neo4j using bin/neo4j-shell
>>> 
>>> Then run your commands ending with a semicolon.
>>> 
>>> Just curious: Your data is imported as one node per row? That's not really 
>>> a graph structure.
>>> 
>>> 
>>> 
>>> 
>>> On Wed, Jun 4, 2014 at 6:56 PM, Paul Damian  wrote:
>>> Hi there,
>>> 
>>> I'm experimenting with Neo4j while benchmarking a bunch of NoSQL databases 
>>> for my graduation paper. 
>>> I'm using the web interface to populate the database. I've been able to 
>>> load the smaller tables from my SQL database and LOAD CSV works fine.
>>> By small, I mean a few columns (4-5) and some rows (1 million). However, 
>>> when I try to upload a larger table (15 columns, 12 million rows), it 
>>> creates the nodes but it doesn't set any properties.
>>> I've tried to reduce the number of records (to 100) and also the number of 
>>> columns (just the Id property), but no luck so far.
>>> 
>>> The cypher command used is this one
>>> USING PERIODIC 

Re: [Neo4j] LOAD CSV creates nodes but does not set properties

2014-06-17 Thread Paul Damian
In the file I only have two columns: one for the client id, which is never 
null, and CityId, which may sometimes be null. Should I export the records 
from the SQL database leaving out the columns that contain null values?

On Tuesday, 17 June 2014 at 15:39:14 UTC+3, Michael Hunger wrote:
>
> if they don't have a value for city id, do they then have empty columns 
> there still? like "user-id,,
>
> You probably want to filter these rows?
>
> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS 
>> c
>>
>> WHERE coalesce(c.CityId,"") <> ""
> ...
>
> On 17.06.2014 at 11:23, Paul Damian wrote:
>
> Well, the csv file contains some rows that do not have a value for CityId, 
> and the rows are unique regarding the clientID. There are 11M clients 
> living in 14K Cities. Is there a limit of links/node?
> Now I've created a piece of code that reads from file and creates each 
> relationship, but, as you can imagine, it works really slow in this 
> scenario.
>  
>
>> did you create an index on :Client(Id) and :City(Id)
>>
>> what happens if you do:
>>
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
>> AS c
>>  MATCH (client: Client { Id: toInt(c.Id)})
>>
>> RETURN count(*)
>>
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
>> AS c
>>  MATCH (city: City { Id: toInt(c.CityId)})
>>
>> RETURN count(*)
>>
>> each count should be equivalent to the # of rows in the file.
>>
>> Michael
>>
>> On 16.06.2014 at 17:47, Paul Damian wrote:
>>
>> Somehow I've managed to load all the nodes and now I'm trying to load the 
>> links as well. I read the nodes from csv file and create the relation 
>> between them. I run the following command:
>> USING PERIODIC COMMIT 100 
>>  LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
>> AS c
>>  MATCH (client: Client { Id: toInt(c.Id)}), (city: City { Id: 
>> toInt(c.CityId)})
>>  CREATE (client)-[r:LOCATED_IN]->(city)
>>
>> Running with a smaller commit size returns this error 
>> Neo.DatabaseError.Statement.ExecutionFailure, while increasing the 
>> commit size to 1 throws Neo.DatabaseError.General.UnknownFailure. 
>> Can you help me with this?
>>
>>
>> On Thursday, 5 June 2014 at 12:05:18 UTC+3, Michael Hunger wrote:
>>>
>>> Perhaps something with field or line terminators?
>>>
>>> I assume it blows up the field separation.
>>>
>>> Try to run:
>>>
>>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" AS c
>>> RETURN { Id: toInt(c.Id), FirstName: c.FirstName, LastName: c.Lastname, 
>>> Address: c.Address, ZipCode: toInt(c.ZipCode), Email: c.Email, Phone: 
>>> c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, Latitude: 
>>> toFloat(c.Latitude), Longitude: toFloat(c.Longitude), AgencyId: 
>>> toInt(c.AgencyId), RowStatus: toInt(c.RowStatus)} as data, c as line
>>> LIMIT 3
>>>
>>>
>>>
>>> On Thu, Jun 5, 2014 at 10:51 AM, Paul Damian  
>>> wrote:
>>>
 I've tried using the shell and I get the same results: nodes with no 
 properties.
 I've created the csv file using MsSQL Server Export. Is it relevant?

 About your curiosity: I figured I would import the nodes first, then the 
 relationships from the connection tables. Am I doing it wrong?

 Thanks

 On Thursday, 5 June 2014 at 09:54:31 UTC+3, Michael Hunger wrote:
>
> I'd probably use a commit size in your case of 50k or 100k.
>
> Try to use the neo4j-shell and not the web-interface.
>
> Connect to neo4j using bin/neo4j-shell
>
> Then run your commands ending with a semicolon.
>
> Just curious: Your data is imported as one node per row? That's not 
> really a graph structure.
>
>
>
>
> On Wed, Jun 4, 2014 at 6:56 PM, Paul Damian  
> wrote:
>
>> Hi there,
>>
>> I'm experimenting with Neo4j while benchmarking a bunch of NoSQL 
>> databases for my graduation paper. 
>> I'm using the web interface to populate the database. I've been able 
>> to load the smaller tables from my SQL database and LOAD CSV works fine.
>> By small, I mean a few columns (4-5) and some rows (1 million). 
>> However, when I try to upload a larger table (15 columns, 12 million 
>> rows), 
>> it creates the nodes but it doesn't set any properties.
>> I've tried to reduce the number of records (to 100) and also the 
>> number of columns (just the Id property), but no luck so far.
>>
>> The cypher command used is this one
>> USING PERIODIC COMMIT 100
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" 
>> AS c
>> CREATE (:Client { Id: toInt(c.Id), FirstName: c.FirstName, LastName: 
>> c.Lastname, Address: c.Address, ZipCode: toInt(c.ZipCode), Email: 
>> c.Email, 
>> Phone: c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, 
>> Latitude: toFloat(c.Latitude), Longitude: toFloat(c.Longitude), 
>> AgencyId: 
>> toInt(c.AgencyId), R

Re: [Neo4j] Upgrading neo4j 1.9.3 to 2.0.3 fails with Invalid log format version found, expected 3 but was 2

2014-06-17 Thread Michael Hunger
Btw. just got the info that it is fixed and will be part of 2.0.4

https://github.com/neo4j/neo4j/commit/37371aa (Thanks Jake!)

Michael
On 17.06.2014 at 14:55, Michael Hunger wrote:

> This is a known issue which is currently being worked on,
> 
> can you delete the logical log files of the lucene index after your upgrade?
> 
> that means
> 
> rm graph.db/index/lucene.log.*
> 
> and you _might_ need to create a transaction against an index, like creating
> an index and deleting it again, e.g. from java code or the shell.
> 
> db.index().forNodes("foo").delete()
> 
> 
> 
> Thanks a lot
> 
> Michael
> 
> On 17.06.2014 at 09:59, Mamta Thakur wrote:
> 
>> Hi,
>> 
>> I have been trying to upgrade neo4j from 1.9.3 to 2.0.3. SDN from 
>> 2.3.1.RELEASE to 3.1.0.RELEASE.
>> 
>> Followed the steps listed @ 
>> http://docs.neo4j.org/chunked/stable/deployment-upgrading.html#explicit-upgrade
>> I try bringing up the server with the upgrade configuration. There are a few 
>> new folders created in the db store, one of which is upgrade_backup, and 
>> the messages log there says the upgrade happened.
>> 
>> 2014-06-17 07:16:55.286+ INFO  [o.n.k.i.DiagnosticsManager]: --- 
>> INITIALIZED diagnostics END ---
>> 2014-06-17 07:17:00.216+ INFO  [o.n.k.i.n.s.StoreFactory]: Starting 
>> upgrade of database store files
>> 2014-06-17 07:17:00.225+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 10% complete
>> 2014-06-17 07:17:00.228+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 20% complete
>> 2014-06-17 07:17:00.231+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 30% complete
>> 2014-06-17 07:17:00.233+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 40% complete
>> 2014-06-17 07:17:00.236+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 50% complete
>> 2014-06-17 07:17:00.239+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 60% complete
>> 2014-06-17 07:17:00.241+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 70% complete
>> 2014-06-17 07:17:00.244+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 80% complete
>> 2014-06-17 07:17:00.247+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 90% complete
>> 2014-06-17 07:17:00.249+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
>> 100% complete
>> 2014-06-17 07:17:03.776+ INFO  [o.n.k.i.n.s.StoreFactory]: Finished 
>> upgrade of database store files
>> 
>> But I get the error with log/index.
>> 
>> Exception when stopping 
>> org.neo4j.index.impl.lucene.LuceneDataSource@42a792f0 
>> org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: Invalid log format version 
>> found, expected 3 but was 2. To be able to upgrade from an older log format 
>> version there must have been a clean shutdown of the database
>> java.lang.RuntimeException: 
>> org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: 
>> Invalid log format version found, expected 3 but was 2. To be able to 
>> upgrade from an older log format version there must have 
>> been a clean shutdown of the database
>> at 
>> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy$1.reached(LogPruneStrategies.java:250)
>> at 
>> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$AbstractPruneStrategy.prune(LogPruneStrategies.java:78)
>> at 
>> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy.prune(LogPruneStrategies.java:222)
>> at 
>> org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog.close(XaLogicalLog.java:742)
>> at 
>> org.neo4j.kernel.impl.transaction.xaframework.LogBackedXaDataSource.stop(LogBackedXaDataSource.java:69)
>> at 
>> org.neo4j.index.impl.lucene.LuceneDataSource.stop(LuceneDataSource.java:310)
>> at 
>> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
>> at 
>> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:547)
>> at 
>> org.neo4j.kernel.lifecycle.LifeSupport.remove(LifeSupport.java:339)
>> at 
>> org.neo4j.kernel.impl.transaction.XaDataSourceManager.unregisterDataSource(XaDataSourceManager.java:272)
>> at 
>> org.neo4j.index.lucene.LuceneKernelExtension.stop(LuceneKernelExtension.java:92)
>> at 
>> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
>> at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
>> at 
>> org.neo4j.kernel.extension.KernelExtensions.stop(KernelExtensions.java:124)
>> at 
>> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
>> at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
>> at 
>> org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:185)
>> at 
>> org.neo4j.kernel.InternalAbstractGraphDatabase.shutdown(InternalAbstractGraphDat

Re: [Neo4j] 'Using Periodic Commit' throws an invalid syntax exception

2014-06-17 Thread ducky
Oh, that was a really useful command when updating large datasets. +1 for 
this feature, please. 
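
The "run it repeatedly until it returns 0" workaround is just a loop around a limited delete. A control-flow sketch with a hypothetical `delete_batch` standing in for actually sending the LIMIT-ed Cypher statement to the server (here it only drains an in-memory list so the terminating loop can be shown):

```python
# Sketch of repeated batched deletion: keep issuing a LIMIT-ed delete
# until a round reports that nothing was deleted.
nodes = list(range(2500))          # pretend graph with 2500 nodes
BATCH = 1000                       # the LIMIT used per round

def delete_batch(limit):
    """Hypothetical stand-in for one round of the limited Cypher delete."""
    deleted = min(limit, len(nodes))
    del nodes[:deleted]
    return deleted                 # mirrors the query's RETURN count(*)

total = 0
while True:
    n = delete_batch(BATCH)
    total += n
    if n == 0:                     # stop once a round deletes nothing
        break

print(total)  # 2500
```

With a real driver, each iteration would run the MATCH/OPTIONAL MATCH/DELETE statement in its own transaction, which is exactly what keeps each commit small.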


On Tuesday, 17 June 2014 14:08:08 UTC+1, Michael Hunger wrote:
>
> Sorry, that feature was removed between M06 and 2.1.0 :(
>
> So what you have to do is to run this repeatedly:
>
> MATCH (a)
>
> LIMIT 1
>
> OPTIONAL MATCH (a)-[r]-()
> DELETE a,r
>
> RETURN count(*);
>
> until it returns 0.
>
> You can try higher limits though, depending on the number of relationships 
> per node: with 10 rels per node this will be 100k ops, with 100 -> 1M ops.
>
> Michael
>
> On 17.06.2014 at 15:05, ducky wrote:
>
> Hi,
> I am using Neo4j 2.1.2 and when I try to run the example given by Michael 
> Hunger here : 
>
> Query:
> USING PERIODIC COMMIT
> MATCH (a)
> OPTIONAL MATCH (a)-[r]-()
> DELETE a,r;
>
> Error:
> Neo.ClientError.Statement.InvalidSyntax
>
> Invalid input 'M': expected whitespace, comment, an integer or LoadCSVQuery 
> (line 2, column 1)
> "MATCH (a)"
>
>
> My question is: has the use of this command changed since this blog post?
>
> cheers
>
>



[Neo4j] 'Using Periodic Commit' throws an invalid syntax exception

2014-06-17 Thread ducky
Hi,
I am using Neo4j 2.1.2 and when I try to run the example given by Michael 
Hunger here : 

Query:
USING PERIODIC COMMIT
MATCH (a)
OPTIONAL MATCH (a)-[r]-()
DELETE a,r;

Error:
Neo.ClientError.Statement.InvalidSyntax

Invalid input 'M': expected whitespace, comment, an integer or LoadCSVQuery 
(line 2, column 1)
"MATCH (a)"


My question is: has the use of this command changed since this blog post?

cheers
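For reference, the parser error is saying that after USING PERIODIC COMMIT it expects an optional batch size followed by a LoadCSVQuery. In 2.1 the hint is only accepted directly in front of LOAD CSV, so a minimal accepted form (hypothetical file path and column name) would look like:

```cypher
USING PERIODIC COMMIT 500
LOAD CSV WITH HEADERS FROM "file:/tmp/people.csv" AS row
CREATE (:Person {name: row.name});
```

Prefixing a plain MATCH ... DELETE query with it is what triggers exactly this syntax error.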



[Neo4j] Neo4j Spatial 0.13-neo4j-2.1.2 released

2014-06-17 Thread Axel Morgner

Hi,

just released Neo4j Spatial for Neo4j 2.1.2: 
https://github.com/neo4j-contrib/m2/tree/master/releases/org/neo4j/neo4j-spatial/0.13-neo4j-2.1.2


Cheers,
Axel


--

Axel Morgner · a...@morgner.de · @amorgner

CEO Structr (c/o Morgner UG) · Hanauer Landstr. 291a · 60314 Frankfurt · 
Germany

phone: +49 151 40522060 · skype: axel.morgner

http://structr.org - Open Source CMS and Web Framework based on Neo4j, 
won Graphie Award for Most Innovative Open Source Graph Application
structr Mailing List and Forum 

Graph Database Usergroup "graphdb-frankfurt" 





Re: [Neo4j] 'Using Periodic Commit' throws an invalid syntax exception

2014-06-17 Thread Michael Hunger
Sorry, that feature was removed between M06 and 2.1.0 :(

So what you have to do is to run this repeatedly:

MATCH (a)
WITH a LIMIT 1
OPTIONAL MATCH (a)-[r]-()
DELETE a,r
RETURN count(*);

until it returns 0.

You can try higher limits though, depending on the number of relationships per 
node: with 10 rels per node this will be 100k ops, with 100 -> 1M ops.

Michael

On 17.06.2014 at 15:05, ducky wrote:

> Hi,
> I am using Neo4j 2.1.2 and when I try to run the example given by Michael 
> Hunger here: 
> 
> Query:
> USING PERIODIC COMMIT
> MATCH (a)
> OPTIONAL MATCH (a)-[r]-()
> DELETE a,r;
> 
> Error:
> Neo.ClientError.Statement.InvalidSyntax
> Invalid input 'M': expected whitespace, comment, an integer or LoadCSVQuery 
> (line 2, column 1)
> "MATCH (a)"
> 
> My question is: has the use of this command changed since this blog post?
> 
> cheers
> 
> 



Re: [Neo4j] Upgrading neo4j 1.9.3 to 2.0.3 fails with Invalid log format version found, expected 3 but was 2

2014-06-17 Thread Michael Hunger
This is a known issue which is currently being worked on.

can you delete the logical log files of the lucene index after your upgrade?

that means

rm graph.db/index/lucene.log.*

and you _might_ need to create a transaction against an index, like creating
an index and deleting it again, e.g. from java code or the shell.

db.index().forNodes("foo").delete()



Thanks a lot

Michael

On 17.06.2014 at 09:59, Mamta Thakur wrote:

> Hi,
> 
> I have been trying to upgrade neo4j from 1.9.3 to 2.0.3. SDN from 
> 2.3.1.RELEASE to 3.1.0.RELEASE.
> 
> Followed the steps listed @ 
> http://docs.neo4j.org/chunked/stable/deployment-upgrading.html#explicit-upgrade
> I try bringing up the server with the upgrade configuration. There are a few 
> new folders created in the db store. One of them is upgrade_backup, and the 
> messages log there says the upgrade happened.
> 
> 2014-06-17 07:16:55.286+ INFO  [o.n.k.i.DiagnosticsManager]: --- 
> INITIALIZED diagnostics END ---
> 2014-06-17 07:17:00.216+ INFO  [o.n.k.i.n.s.StoreFactory]: Starting 
> upgrade of database store files
> 2014-06-17 07:17:00.225+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 10% complete
> 2014-06-17 07:17:00.228+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 20% complete
> 2014-06-17 07:17:00.231+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 30% complete
> 2014-06-17 07:17:00.233+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 40% complete
> 2014-06-17 07:17:00.236+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 50% complete
> 2014-06-17 07:17:00.239+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 60% complete
> 2014-06-17 07:17:00.241+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 70% complete
> 2014-06-17 07:17:00.244+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 80% complete
> 2014-06-17 07:17:00.247+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 90% complete
> 2014-06-17 07:17:00.249+ INFO  [o.n.k.i.n.s.StoreFactory]: Store upgrade 
> 100% complete
> 2014-06-17 07:17:03.776+ INFO  [o.n.k.i.n.s.StoreFactory]: Finished 
> upgrade of database store files
> 
> But I get the error with log/index.
> 
> Exception when stopping org.neo4j.index.impl.lucene.LuceneDataSource@42a792f0 
> org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: 
> Invalid log format version found, expected 3 but was 2. To be able to upgrade 
> from an older log format version there must have been a clean shutdown of the 
> database
> java.lang.RuntimeException: 
> org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: 
> Invalid log format version found, expected 3 but was 2. To be able to upgrade 
> from an older log format version there must have been a clean shutdown of the 
> database
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy$1.reached(LogPruneStrategies.java:250)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$AbstractPruneStrategy.prune(LogPruneStrategies.java:78)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy.prune(LogPruneStrategies.java:222)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog.close(XaLogicalLog.java:742)
> at 
> org.neo4j.kernel.impl.transaction.xaframework.LogBackedXaDataSource.stop(LogBackedXaDataSource.java:69)
> at 
> org.neo4j.index.impl.lucene.LuceneDataSource.stop(LuceneDataSource.java:310)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:547)
> at org.neo4j.kernel.lifecycle.LifeSupport.remove(LifeSupport.java:339)
> at 
> org.neo4j.kernel.impl.transaction.XaDataSourceManager.unregisterDataSource(XaDataSourceManager.java:272)
> at 
> org.neo4j.index.lucene.LuceneKernelExtension.stop(LuceneKernelExtension.java:92)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
> at 
> org.neo4j.kernel.extension.KernelExtensions.stop(KernelExtensions.java:124)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
> at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
> at 
> org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:185)
> at 
> org.neo4j.kernel.InternalAbstractGraphDatabase.shutdown(InternalAbstractGraphDatabase.java:801)
> at 
> org.springframework.data.neo4j.support.DelegatingGraphDatabase.shutdown(DelegatingGraphDatabase.java:270)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingM

Re: [Neo4j] Hierarchical facets

2014-06-17 Thread Tom Zeppenfeldt
Hi Michael,

have you been able to look at the profiling info that I sent you? Perhaps
we can have a chat about it tomorrow in Amsterdam.

Best,

Tom


Met vriendelijke groet / With kind regards



Ir. T. Zeppenfeldt
van der Waalsstraat 30
6706 JR  Wageningen
The Netherlands

Mobile: +31 6 23 28 78 06
Phone: +31 3 17 84 22 17
E-mail: t.zeppenfe...@ophileon.com
Web: www.ophileon.com
Twitter: tomzeppenfeldt
Skype: tomzeppenfeldt


2014-05-21 21:23 GMT+02:00 Michael Hunger 
:

> That's why I suggested UNION.
>
> So the two individual queries take 14s? Still way too long.
>
>
> On Wed, May 21, 2014 at 3:40 PM, Tom Zeppenfeldt  > wrote:
>
>> I have some problems starting the shell from my Mac Terminal (it's giving
>> me an out-of-memory error), but it works from the webadmin power console. I
>> can't find any documentation either on how to set the shell to return
>> profile output.
>>
>> neo4j-sh (?)$ export termname="Eurovoc"
>>
>> *Your first Query*
>> neo4j-sh (?)$ match
>> (j:jurt)-[:HAS_TERM]->()-[:BT*0..]->(t:term)-[:BT]->(t2:term
>> {name:{termname}})  return t.name, count(distinct j) as count  order by
>> count desc limit 10
>>
>> > ;
>> ==> +-+
>> ==> | t.name  | count |
>> ==> +-+
>> ==> | "gezondheidsbeleid" | 1823  |
>> ==> | "overtreding"   | 1393  |
>> ==> | "Europese organisatie"  | 1389  |
>> ==> | "EU-instantie"  | 1323  |
>> ==> | "mondiale organisatie"  | 1277  |
>> ==> | "gespecialiseerde instelling van de VN" | 1143  |
>> ==> | "handeling van de EU"   | 1129  |
>> ==> | "internationaal publiekrecht"   | 1091  |
>> ==> | "sociaal beleid"| 971   |
>> ==> | "rechtsvordering"   | 915   |
>> ==> +-+
>> ==> 10 rows
>> *==> 8775 ms*
>>
>> *Your second Query*
>> neo4j-sh (?)$ match (j:jurt)-[:HAS_TERM]->()-[:BT*0..]->(t2:term
>> {name:{termname}})  return t.name, count(distinct j) as count  order by
>> count desc limit 10;
>> ==> SyntaxException: t not defined (line 1, column 79)
>> ==> "match (j:jurt)-[:HAS_TERM]->()-[:BT*0..]->(t2:term
>> {name:{termname}})  return t.name, count(distinct j) as count  order by
>> count desc limit 10"
>> ==>
>>  ^
>> neo4j-sh (?)$ match (j:jurt)-[:HAS_TERM]->()-[:BT*0..]->(t2:term
>> {name:{termname}})  return t2.name, count(distinct j) as count  order by
>> count desc limit 10;
>> ==> +---+
>> ==> | t2.name   | count |
>> ==> +---+
>> ==> | "Eurovoc" | 9576  |
>> ==> +---+
>> ==> 1 row
>> *==> 3668 ms*
>>
>>
>> But what I need is to include the docs on both the term I request and the
>> count on its children, like this. I notice that the combination takes
>> longer than the two separate queries combined.
>>
>> neo4j-sh (?)$ match
>> (j:jurt)-[:HAS_TERM]->()-[:BT*0..]->(t:term)-[:BT*0..1]->(t2:term
>> {name:{termname}})  return t.name, count(distinct j) as count  order by
>> count desc limit 10;
>> ==> +-+
>> ==> | t.name  | count |
>> ==> +-+
>> ==> | "Eurovoc"   | 9576  |
>> ==> | "gezondheidsbeleid" | 1823  |
>> ==> | "overtreding"   | 1393  |
>> ==> | "Europese organisatie"  | 1389  |
>> ==> | "EU-instantie"  | 1323  |
>> ==> | "mondiale organisatie"  | 1277  |
>> ==> | "gespecialiseerde instelling van de VN" | 1143  |
>> ==> | "handeling van de EU"   | 1129  |
>> ==> | "internationaal publiekrecht"   | 1091  |
>> ==> | "sociaal beleid"| 971   |
>> ==> +-+
>> ==> 10 rows
>> *==> 17802 ms*
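This is where Michael's UNION suggestion would come in: run the two fast queries as one statement instead of the slow combined pattern. A hedged sketch (both sides must return the same column aliases; ordering across the combined result may have to be done client-side):

```cypher
MATCH (j:jurt)-[:HAS_TERM]->()-[:BT*0..]->(t2:term {name:{termname}})
RETURN t2.name AS name, count(DISTINCT j) AS count
UNION
MATCH (j:jurt)-[:HAS_TERM]->()-[:BT*0..]->(t:term)-[:BT]->(:term {name:{termname}})
RETURN t.name AS name, count(DISTINCT j) AS count;
```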
>>



Re: [Neo4j] LOAD CSV creates nodes but does not set properties

2014-06-17 Thread Michael Hunger
if they don't have a value for CityId, do they still have empty columns there, 
like "user-id,,"?

You probably want to filter these rows?

LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
WITH c WHERE coalesce(c.CityId,"") <> ""
...
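Putting the pieces together (the index check from earlier in the thread plus this row filter), a hedged sketch of the relationship import — paths and column names as used in the thread, batch size following the earlier 50k/100k advice. Note the extra WITH line, since LOAD CSV cannot be followed directly by WHERE:

```cypher
CREATE INDEX ON :Client(Id);
CREATE INDEX ON :City(Id);

USING PERIODIC COMMIT 50000
LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
WITH c WHERE coalesce(c.CityId,"") <> ""
MATCH (client:Client {Id: toInt(c.Id)}), (city:City {Id: toInt(c.CityId)})
CREATE (client)-[:LOCATED_IN]->(city);
```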

On 17.06.2014 at 11:23, Paul Damian wrote:

> Well, the csv file contains some rows that do not have a value for CityId, 
> and the rows are unique regarding the clientID. There are 11M clients living 
> in 14K Cities. Is there a limit on links per node?
> Now I've created a piece of code that reads from the file and creates each 
> relationship, but, as you can imagine, it runs really slowly in this scenario.
>  
> did you create an index on :Client(Id) and :City(Id)
> 
> what happens if you do:
> 
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>  MATCH (client: Client { Id: toInt(c.Id)})
> RETURN count(*)
> 
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>  MATCH (city: City { Id: toInt(c.CityId)})
> RETURN count(*)
> 
> each count should be equivalent to the # of rows in the file.
> 
> Michael
> 
On 16.06.2014 at 17:47, Paul Damian wrote:
> 
>> Somehow I've managed to load all the nodes and now I'm trying to load the 
>> links as well. I read the nodes from csv file and create the relation 
>> between them. I run the following command:
>> USING PERIODIC COMMIT 100 
>>  LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS c
>>  MATCH (client: Client { Id: toInt(c.Id)}), (city: City { Id: 
>> toInt(c.CityId)})
>>  CREATE (client)-[r:LOCATED_IN]->(city)
>> 
>> Running with a smaller commit size returns this error 
>> Neo.DatabaseError.Statement.ExecutionFailure, while increasing the commit 
>> size to 1 throws Neo.DatabaseError.General.UnknownFailure. 
>> Can you help me with this?
>> 
>> 
> On Thursday, 5 June 2014 at 12:05:18 UTC+3, Michael Hunger wrote:
>> Perhaps something with field or line terminators?
>> 
>> I assume it blows up the field separation.
>> 
>> Try to run:
>> 
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" AS c
>> RETURN { Id: toInt(c.Id), FirstName: c.FirstName, LastName: c.Lastname, 
>> Address: c.Address, ZipCode: toInt(c.ZipCode), Email: c.Email, Phone: 
>> c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, Latitude: 
>> toFloat(c.Latitude), Longitude: toFloat(c.Longitude), AgencyId: 
>> toInt(c.AgencyId), RowStatus: toInt(c.RowStatus)} as data, c as line
>> LIMIT 3
>> 
>> 
>> 
>> On Thu, Jun 5, 2014 at 10:51 AM, Paul Damian  wrote:
>> I've tried using the shell and I get the same results: nodes with no 
>> properties.
>> I've created the csv file using MsSQL Server Export. Is it relevant?
>> 
>> About your curiosity: I figured I would import first the nodes, then the 
>> relationships from the connection tables. Am I doing it wrong?
>> 
>> Thanks
>> 
>> On Thursday, 5 June 2014 at 09:54:31 UTC+3, Michael Hunger wrote:
>> I'd probably use a commit size in your case of 50k or 100k.
>> 
>> Try to use the neo4j-shell and not the web-interface.
>> 
>> Connect to neo4j using bin/neo4j-shell
>> 
>> Then run your commands ending with a semicolon.
>> 
>> Just curious: Your data is imported as one node per row? That's not really a 
>> graph structure.
>> 
>> 
>> 
>> 
>> On Wed, Jun 4, 2014 at 6:56 PM, Paul Damian  wrote:
>> Hi there,
>> 
>> I'm experimenting with Neo4j while benchmarking a bunch of NoSQL databases 
>> for my graduation paper. 
>> I'm using the web interface to populate the database. I've been able to load 
>> the smaller tables from my SQL database and LOAD CSV works fine.
>> By small, I mean a few columns (4-5) and some rows (1 million). However, 
>> when I try to upload a larger table (15 columns, 12 million rows), it 
>> creates the nodes but it doesn't set any properties.
>> I've tried to reduce the number of records (to 100) and also the number of 
>> columns( just the Id property ), but no luck so far.
>> 
>> The cypher command used is this one
>> USING PERIODIC COMMIT 100
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" AS c
>> CREATE (:Client { Id: toInt(c.Id), FirstName: c.FirstName, LastName: 
>> c.Lastname, Address: c.Address, ZipCode: toInt(c.ZipCode), Email: c.Email, 
>> Phone: c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, 
>> Latitude: toFloat(c.Latitude), Longitude: toFloat(c.Longitude), AgencyId: 
>> toInt(c.AgencyId), RowStatus: toInt(c.RowStatus)})
>> 
>> Any help and indication is welcomed,
>> Paul
>> 

[Neo4j] Upgrading neo4j 1.9.3 to 2.0.3 fails with Invalid log format version found, expected 3 but was 2

2014-06-17 Thread Mamta Thakur
Hi,

I have been trying to upgrade neo4j from 1.9.3 to 2.0.3. SDN 
from 2.3.1.RELEASE to 3.1.0.RELEASE.

Followed the steps listed 
@ 
http://docs.neo4j.org/chunked/stable/deployment-upgrading.html#explicit-upgrade
I try bringing up the server with the upgrade configuration. There are a few 
new folders created in the db store. One of them is upgrade_backup, and the 
messages log there says the upgrade happened.

2014-06-17 07:16:55.286+ INFO  [o.n.k.i.DiagnosticsManager]: --- 
INITIALIZED diagnostics END ---
2014-06-17 07:17:00.216+ INFO  [o.n.k.i.n.s.StoreFactory]: Starting 
upgrade of database store files
2014-06-17 07:17:00.225+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 10% complete
2014-06-17 07:17:00.228+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 20% complete
2014-06-17 07:17:00.231+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 30% complete
2014-06-17 07:17:00.233+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 40% complete
2014-06-17 07:17:00.236+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 50% complete
2014-06-17 07:17:00.239+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 60% complete
2014-06-17 07:17:00.241+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 70% complete
2014-06-17 07:17:00.244+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 80% complete
2014-06-17 07:17:00.247+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 90% complete
2014-06-17 07:17:00.249+ INFO  [o.n.k.i.n.s.StoreFactory]: Store 
upgrade 100% complete
2014-06-17 07:17:03.776+ INFO  [o.n.k.i.n.s.StoreFactory]: Finished 
upgrade of database store files

But I get the error with log/index.

Exception when stopping 
org.neo4j.index.impl.lucene.LuceneDataSource@42a792f0 
org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: 
Invalid log format version found, expected 3 but was 2. To be able to upgrade 
from an older log format version there must have been a clean shutdown of the 
database
java.lang.RuntimeException: 
org.neo4j.kernel.impl.transaction.xaframework.IllegalLogFormatException: 
Invalid log format version found, expected 3 but was 2. To be able to upgrade 
from an older log format version there must have been a clean shutdown of the 
database
at 
org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy$1.reached(LogPruneStrategies.java:250)
at 
org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$AbstractPruneStrategy.prune(LogPruneStrategies.java:78)
at 
org.neo4j.kernel.impl.transaction.xaframework.LogPruneStrategies$TransactionTimeSpanPruneStrategy.prune(LogPruneStrategies.java:222)
at 
org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog.close(XaLogicalLog.java:742)
at 
org.neo4j.kernel.impl.transaction.xaframework.LogBackedXaDataSource.stop(LogBackedXaDataSource.java:69)
at 
org.neo4j.index.impl.lucene.LuceneDataSource.stop(LuceneDataSource.java:310)
at 
org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at 
org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.shutdown(LifeSupport.java:547)
at 
org.neo4j.kernel.lifecycle.LifeSupport.remove(LifeSupport.java:339)
at 
org.neo4j.kernel.impl.transaction.XaDataSourceManager.unregisterDataSource(XaDataSourceManager.java:272)
at 
org.neo4j.index.lucene.LuceneKernelExtension.stop(LuceneKernelExtension.java:92)
at 
org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at 
org.neo4j.kernel.extension.KernelExtensions.stop(KernelExtensions.java:124)
at 
org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:527)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:155)
at 
org.neo4j.kernel.lifecycle.LifeSupport.shutdown(LifeSupport.java:185)
at 
org.neo4j.kernel.InternalAbstractGraphDatabase.shutdown(InternalAbstractGraphDatabase.java:801)
at 
org.springframework.data.neo4j.support.DelegatingGraphDatabase.shutdown(DelegatingGraphDatabase.java:270)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at 
org.springframework.beans.factory.support.DisposableBeanAdapter.invokeCustomDestroyMethod(DisposableBeanAdapter.java:327)
at 
org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:253)
at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:510)
at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingleton

[Neo4j] Re: Neo4j database ALWAYS shuts down incorrectly if start/stop as a service from a list of windows services

2014-06-17 Thread Denys Hryvastov
Here is the stack trace that I get when I try to upgrade from 1.9.5 to 2.0:

2014-06-17 09:48:27.319+ INFO  [API] Setting startup timeout to: 
12ms based on -1
Detected incorrectly shut down database, performing recovery..
2014-06-17 09:48:28.108+ DEBUG [API]
org.neo4j.server.ServerStartupException: Starting Neo4j Server failed: 
Error starting org.neo4j.kernel.EmbeddedGraphDatabase, 
D:\Neo4j\neo4j-enterprise-2.0.0\data\graph.db
at 
org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:209) 
~[neo4j-server-2.0.0.jar:2.0.0]
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:87) 
[neo4j-server-2.0.0.jar:2.0.0]
at org.neo4j.server.Bootstrapper.main(Bootstrapper.java:50) 
[neo4j-server-2.0.0.jar:2.0.0]
Caused by: java.lang.RuntimeException: Error starting 
org.neo4j.kernel.EmbeddedGraphDatabase, 
D:\Neo4j\neo4j-enterprise-2.0.0\data\graph.db
at 
org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:333)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.EmbeddedGraphDatabase.(EmbeddedGraphDatabase.java:63) 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:92)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:198)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.impl.recovery.StoreRecoverer.recover(StoreRecoverer.java:115) 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.server.preflight.PerformRecoveryIfNecessary.run(PerformRecoveryIfNecessary.java:59)
 
~[neo4j-server-2.0.0.jar:2.0.0]
at 
org.neo4j.server.preflight.PreFlightTasks.run(PreFlightTasks.java:70) 
~[neo4j-server-2.0.0.jar:2.0.0]
at 
org.neo4j.server.AbstractNeoServer.runPreflightTasks(AbstractNeoServer.java:319)
 
~[neo4j-server-2.0.0.jar:2.0.0]
at 
org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:144) 
~[neo4j-server-2.0.0.jar:2.0.0]
... 2 common frames omitted
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 
'org.neo4j.kernel.impl.transaction.XaDataSourceManager@2b1eb67d' was 
successfully initialized, but failed to start.
Please see attached cause exception.
at 
org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:504)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:115) 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:310)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
... 10 common frames omitted
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 
'org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource@1bf5df6a' was 
successfully initialized, but failed to start.
Please see attached cause exception.
at 
org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:504)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:115) 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.impl.transaction.XaDataSourceManager.start(XaDataSourceManager.java:164)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:498)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
... 12 common frames omitted
Caused by: 
org.neo4j.kernel.impl.storemigration.StoreUpgrader$UpgradingStoreVersionNotFoundException:
 
'neostore' does not contain a store version, please ensure that the 
original database was shut down in a clean state.
at 
org.neo4j.kernel.impl.storemigration.UpgradableDatabase.checkUpgradeable(UpgradableDatabase.java:85)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.impl.storemigration.StoreUpgrader.attemptUpgrade(StoreUpgrader.java:72)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.impl.nioneo.store.StoreFactory.tryToUpgradeStores(StoreFactory.java:143)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.impl.nioneo.store.StoreFactory.newNeoStore(StoreFactory.java:123)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource.start(NeoStoreXaDataSource.java:323)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
at 
org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:498)
 
~[neo4j-kernel-2.0.0.jar:2.0.0]
... 15 common frames omitted
2014-06-17 09:48:28.109+ DEBUG [API] Failed to start Neo Server on port 
[7474]



Re: [Neo4j] LOAD CSV creates nodes but does not set properties

2014-06-17 Thread Paul Damian
Well, the csv file contains some rows that do not have a value for CityId, 
and the rows are unique regarding the clientID. There are 11M clients 
living in 14K Cities. Is there a limit on links per node?
Now I've created a piece of code that reads from the file and creates each 
relationship, but, as you can imagine, it runs really slowly in this 
scenario.
 

> did you create an index on :Client(Id) and :City(Id)
>
> what happens if you do:
>
> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS 
> c
>  MATCH (client: Client { Id: toInt(c.Id)})
>
> RETURN count(*)
>
> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" AS 
> c
>  MATCH (city: City { Id: toInt(c.CityId)})
>
> RETURN count(*)
>
> each count should be equivalent to the # of rows in the file.
>
> Michael
>
> On 16.06.2014 at 17:47, Paul Damian wrote:
>
> Somehow I've managed to load all the nodes and now I'm trying to load the 
> links as well. I read the nodes from csv file and create the relation 
> between them. I run the following command:
> USING PERIODIC COMMIT 100 
>  LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/LOCATED_IN.csv" 
> AS c
>  MATCH (client: Client { Id: toInt(c.Id)}), (city: City { Id: 
> toInt(c.CityId)})
>  CREATE (client)-[r:LOCATED_IN]->(city)
>
> Running with a smaller commit size returns this error 
> Neo.DatabaseError.Statement.ExecutionFailure, while increasing the commit 
> size to 1 throws Neo.DatabaseError.General.UnknownFailure. 
> Can you help me with this?
>
>
> On Thursday, 5 June 2014 at 12:05:18 UTC+3, Michael Hunger wrote:
>>
>> Perhaps something with field or line terminators?
>>
>> I assume it blows up the field separation.
>>
>> Try to run:
>>
>> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" AS c
>> RETURN { Id: toInt(c.Id), FirstName: c.FirstName, LastName: c.Lastname, 
>> Address: c.Address, ZipCode: toInt(c.ZipCode), Email: c.Email, Phone: 
>> c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, Latitude: 
>> toFloat(c.Latitude), Longitude: toFloat(c.Longitude), AgencyId: 
>> toInt(c.AgencyId), RowStatus: toInt(c.RowStatus)} as data, c as line
>> LIMIT 3
>>
>>
>>
>> On Thu, Jun 5, 2014 at 10:51 AM, Paul Damian  wrote:
>>
>>> I've tried using the shell and I get the same results: nodes with no 
>>> properties.
>>> I've created the csv file using MsSQL Server Export. Is it relevant?
>>>
>>> About your curiosity: I figured I would import first the nodes, then the 
>>> relationships from the connection tables. Am I doing it wrong?
>>>
>>> Thanks
>>>
>>> On Thursday, 5 June 2014 at 09:54:31 UTC+3, Michael Hunger wrote:

 I'd probably use a commit size in your case of 50k or 100k.

 Try to use the neo4j-shell and not the web-interface.

 Connect to neo4j using bin/neo4j-shell

 Then run your commands ending with a semicolon.

 Just curious: Your data is imported as one node per row? That's not 
 really a graph structure.




 On Wed, Jun 4, 2014 at 6:56 PM, Paul Damian  
 wrote:

> Hi there,
>
> I'm experimenting with Neo4j while benchmarking a bunch of NoSQL 
> databases for my graduation paper. 
> I'm using the web interface to populate the database. I've been able 
> to load the smaller tables from my SQL database and LOAD CSV works fine.
> By small, I mean a few columns (4-5) and some rows (1 million). 
> However, when I try to upload a larger table (15 columns, 12 million 
> rows), 
> it creates the nodes but it doesn't set any properties.
> I've tried to reduce the number of records (to 100) and also the 
> number of columns( just the Id property ), but no luck so far.
>
> The cypher command used is this one
> USING PERIODIC COMMIT 100
> LOAD CSV WITH HEADERS FROM "file:/Users/pauld/Documents/Client.csv" 
> AS c
> CREATE (:Client { Id: toInt(c.Id), FirstName: c.FirstName, LastName: 
> c.Lastname, Address: c.Address, ZipCode: toInt(c.ZipCode), Email: 
> c.Email, 
> Phone: c.Phone, Fax: c.Fax, BusinessName: c.BusinessName, URL: c.URL, 
> Latitude: toFloat(c.Latitude), Longitude: toFloat(c.Longitude), AgencyId: 
> toInt(c.AgencyId), RowStatus: toInt(c.RowStatus)})
>
> Any help and indication is welcomed,
> Paul
>

Re: [Neo4j] Re: dijkstra bidirectional

2014-06-17 Thread Mattias Persson
Sorry for not replying, but time is a scarce resource :( I don't expect to
get time for this in the coming months. Perhaps there are others willing
to help out!

Take care
Best,
Mattias
Den 10 jun 2014 12:06 skrev "Antonio Grimaldi" <
antonio.grimaldim...@gmail.com>:

> Is org.neo4j.graphalgo.impl.shortestpath.Dijkstra a bidirectional
> Dijkstra implementation?
>
> On Thursday, 8 May 2014 17:04:32 UTC+2, Antonio Grimaldi
> wrote:
>>
>> Hi,
>> Is there an implementation of Bidirectional Dijkstra?
>>
>> Thanks
>> Antonio
>>
>  --
> You received this message because you are subscribed to the Google Groups
> "Neo4j" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to neo4j+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [Neo4j] write unit test for neo4j insert in php

2014-06-17 Thread Michael Hunger
The same way you insert data:

you write a Cypher statement that verifies the existence of your inserted data,
and perhaps another one that verifies the absence of other data.
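
For example, a verification statement could look like this (a sketch; the
:Client label and name value are hypothetical stand-ins for whatever your
insert function writes):

```cypher
// Expect found = 1 for inserted data, 0 for absent data
MATCH (c:Client {name: 'Alice'})
RETURN count(c) AS found;
```

In the Laravel test you would run this through your PHP Neo4j client and
assert on the returned count.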

Michael

On 17.06.2014 at 09:37, ishanka samaraweera wrote:

> I have a function that inserts data into a Neo4j database.
> I want to write a unit test, and I use the Laravel framework.
> I want to test whether the data has been inserted into the database.
> How can I do that?
> Thanks in advance.
> 



Re: [Neo4j] HTTP Status Codes and Errors

2014-06-17 Thread Michael Hunger
Yes, it is by design: as the results are streamed, the HTTP headers have 
already been sent out before query execution happens.

Also, since you can send many queries in one request, the error contains more 
information on which query (the last one) it happened.

There is an errors field in the response.
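
In sketch form (the exact error code and message vary by version), a failed
statement in the transactional endpoint's response body looks roughly like:

```json
{
  "results": [],
  "errors": [
    {
      "code": "Neo.ClientError.Schema.ConstraintViolation",
      "message": "Node already exists with label Client and property Id=42"
    }
  ]
}
```

So a client should treat the request as successful only when the errors array
is empty, regardless of the 200 status code.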

Not sure how much effort it would be to rewrite the implementation to wait for 
the first query to start and, on a failure during parsing or semantic checking, 
send an appropriate error code (query results are also streamed directly, so if 
an error happens while fetching results, the same as above applies).

But then the behavior would be inconsistent depending on whether the offending 
query is the first or the second, which would be even more confusing.

Michael

On 16.06.2014 at 23:07, Hadi Hariri wrote:

> Hi,
> 
> When using the transactional endpoint, when an error occurs, such as a 
> unique constraint violation, the status code returned is still 200. Is this 
> by design? Is the preferred method to always examine the errors property in 
> the body and make sure it's empty, to guarantee success?
> 
> Thanks.
> 
> 



[Neo4j] write unit test for neo4j insert in php

2014-06-17 Thread ishanka samaraweera
I have a function that inserts data into a Neo4j database.
I want to write a unit test, and I use the Laravel framework.
I want to test whether the data has been inserted into the database.
How can I do that?
Thanks in advance.



[Neo4j] HTTP Status Codes and Errors

2014-06-17 Thread Hadi Hariri
Hi,

When using the transactional endpoint, when an error occurs, such as a 
unique constraint violation, the status code returned is still 200. Is this 
by design? Is the preferred method to always examine the errors property in 
the body and make sure it's empty, to guarantee success?

Thanks.
