> I could imagine it only uses one index by default and you have to force
> cypher to use the other index too with "USING INDEX grp:Group(name)"
>
>
> On Fri, Jun 6, 2014 at 9:46 PM, Eric Olson > wrote:
>
>> Yes, I also tried USING PERIODIC COMMIT with 1 and 5 values.
>>
>> Yes, a
FYI - This is using Neo4j 2.1.2 personal enterprise
On Thursday, June 26, 2014 9:32:10 AM UTC-6, Eric Olson wrote:
>
> I have run into a unique situation (bug?). Here is my query that does what
> it is supposed to:
>
> MATCH
>(:Author {name: {author} })-[:WROTE_BOOK]-
I have run into a unique situation (bug?). Here is my query that does what
it is supposed to:
MATCH
(:Author {name: {author} })-[:WROTE_BOOK]->(b:Book {name_min: {book}
})-[:HAS_CHAPTER]->(c:Chapter {chapter: {chapter} })-[:HAS_TEXT]->(t:Text),
(lang:Language {language: {lang} }),
(vers
> add one foreach for each potential value.
>
> Michael
>
> On 24.06.2014 at 18:43, Eric Olson wrote:
>
> Using
>
> USING PERIODIC COMMIT 5
> LOAD CSV WITH HEADERS FROM 'file:/mcpdata/8_grp-aco.csv' AS line
> MATCH (group:Group { name: line.group }
Using
USING PERIODIC COMMIT 5
LOAD CSV WITH HEADERS FROM 'file:/mcpdata/8_grp-aco.csv' AS line
MATCH (group:Group { name: line.group }), (aco:ACO { name: line.aco })
CREATE (group)-[:AXO { line.axo: true }]->(aco)
fails because as you can see on the last line I am trying to set a property
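Michael's suggestion of one FOREACH per potential value can be sketched like this. The file path, labels, and MATCH come from the failing query above; the `'read'`/`'write'` values and property keys are made-up examples, since the actual values in `line.axo` are not shown in the thread:

```cypher
USING PERIODIC COMMIT 5
LOAD CSV WITH HEADERS FROM 'file:/mcpdata/8_grp-aco.csv' AS line
MATCH (group:Group { name: line.group }), (aco:ACO { name: line.aco })
// A property key cannot be computed from line.axo, so emit one FOREACH
// per potential value. The CASE yields a one-element list (execute the
// CREATE once) or an empty list (skip it entirely).
FOREACH (_ IN CASE WHEN line.axo = 'read' THEN [1] ELSE [] END |
  CREATE (group)-[:AXO { read: true }]->(aco))
FOREACH (_ IN CASE WHEN line.axo = 'write' THEN [1] ELSE [] END |
  CREATE (group)-[:AXO { write: true }]->(aco))
```

The CASE-inside-FOREACH pattern is the usual Cypher 2.x idiom for conditional writes; it avoids the dynamic property key that made the original CREATE fail.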
Lol. True :) I will do it the right way next time. Thanks for the lesson :)
On Thursday, June 12, 2014 2:34:09 PM UTC-6, Wes Freeman wrote:
>
> That will work, until you have a tab in that arbitrary text. :) Then
> you'll need quotes again.
>
> Wes
>
> On Thu, Jun 12, 2
> Fields containing a line break, double-quote, and/or commas *should* be
> quoted. (If they are not, the file will likely be impossible to process
> correctly).
>
>
> id,another_data,text
> "1234","data","This can have commas, so this part never gets imported!"
I am trying to import a large amount of data using Cypher's new LOAD CSV
tool. The problem is that one of my properties will contain some arbitrary
text which contains a lot of commas in itself. A basic picture of my data
would look like:
{
  "id": 1234,
  "another_data": "data",
  "text": "This can have commas, so this part never gets imported!"
}
Are you saying that we can just swap out the configuration files while the
instance is running and it will pick up on the changes?
On Thursday, December 12, 2013 4:36:48 AM UTC-7, Peter Neubauer wrote:
>
> This is not true (anymore).
>
> Since we switched to Paxos instead of Zookeeper, you should
harding_with_haproxy
>
> Another recommendation is to direct writes to master and reads to slaves,
> and we have this:
> http://docs.neo4j.org/chunked/stable/ha-haproxy.html#_optimizing_for_reads_and_writes
>
>
> Hope that helps?
>
> - Lasse
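The read/write split Lasse's second link describes can be sketched in an haproxy config along these lines. This is only an illustration under assumed ports and addresses (the linked Neo4j HA docs are authoritative); the idea is an ACL on the HTTP method so mutating requests go to the master and reads are balanced across the slaves:

```
frontend neo4j_http
    mode http
    bind *:8090
    # Writes (POST/PUT/DELETE) go to the master; everything else to slaves.
    acl is_write method POST PUT DELETE
    use_backend neo4j_master if is_write
    default_backend neo4j_slaves

backend neo4j_master
    mode http
    server neo1 127.0.0.1:7474

backend neo4j_slaves
    mode http
    balance roundrobin
    server neo2 127.0.0.1:7475
    server neo3 127.0.0.1:7476
```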
I am playing with a personal, enterprise license of v2.1.1. I have set up a
HA cluster (3 instances) on my local machine. The cluster is up and running
as verified through the webadmin console and writes to master and slaves
are populating through out the cluster as expected. Basically, it all w
e neo4j-shell of your import of a
> tiny variant?
>
> e.g. your 10k file?
>
> I could imagine it only uses one index by default and you have to force
> cypher to use the other index too with "USING INDEX grp:Group(name)"
>
>
> On Fri, Jun 6, 2014 at 9:46 PM, Eric
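The index hint Michael mentions can be sketched as follows. Labels and properties come from elsewhere in the thread (`:Group(name)`, `:User(name)`), but the relationship type and the exact MATCH shape are assumptions; in Cypher 2.x the hinted property must appear in a WHERE predicate:

```cypher
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:/mcpdata/groups.csv' AS line
MATCH (grp:Group), (user:User)
// Force both lookups through their schema indexes instead of letting
// the planner pick only one.
USING INDEX grp:Group(name)
USING INDEX user:User(name)
WHERE grp.name = line.group AND user.name = line.user
CREATE (grp)-[:HAS_MEMBER]->(user)
```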
simply CREATEs nodes?
On Friday, June 6, 2014 12:10:54 PM UTC-6, Michael Hunger wrote:
>
> How did it fail?
>
> Did you try USING PERIODIC COMMIT 1 ?
>
> Do you have an index for :User(name) and :Group(name)?
>
>
> On Fri, Jun 6, 2014 at 12:34 AM, Eric Olson > wrote:
I have read some other topics on this and am still coming up short on a
satisfying solution.
I am:
- Populating my DB using the new CSV import query in Cypher
- Using the Neo4j shell
- Including the "USING PERIODIC COMMIT" statement
I have:
- Successfully imported a 10,000 line file
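The two schema indexes Michael asks about would be created before the import, e.g. in the neo4j-shell (label/property names from the thread; the `schema await` step is the usual Neo4j 2.x way to wait for them to come online):

```cypher
CREATE INDEX ON :User(name);
CREATE INDEX ON :Group(name);
```

After creating them, `schema await` in the shell blocks until the indexes are ONLINE, so the subsequent LOAD CSV MATCHes are index-backed lookups rather than label scans.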