If you didn't miss tx.success() and used the same try-with-resources pattern as
Stefan, it should work.
Unless you're swallowing an exception.
Michael
On 30.08.2014 at 01:16, Alireza Rezaei Mahdiraji wrote:
> Hi Michael,
>
> Yeah I printed the nodes' info before deleting and it shows them.
Hi Michael,
Yeah, I printed the nodes' info before deleting and it shows them. I have not
printed the relationships though; I think the nodes would be enough, right?
Thanks,
Alireza
On Saturday, August 30, 2014 12:25:20 AM UTC+2, Michael Hunger wrote:
Did you create a schema index?
Labels and property names are case-sensitive: you use "USER" (without s) in
your batch import but "Users" (with s and different capitalization) in Cypher.
create index on :Users(id);
Otherwise, see here for the difference between legacy and schema indexes:
http://nigelsmall.com/ne
Can you print out the node-ids and rel-ids that it iterates over?
Did you use the correct label name?
Michael
On 30.08.2014 at 00:22, Alireza Rezaei Mahdiraji wrote:
Hi Stefan,
I tried the Java code; it seems to run, but the nodes and relationships are
not getting deleted.
Also, there is no error. Any idea?
Thanks,
Alireza
On Friday, August 29, 2014 4:59:32 PM UTC+2, Stefan Armbruster wrote:
Yep, I should remove that note; when I wrote it back then, the upgrade process
was more straightforward.
I would have to go over all of them and upgrade them for the different
versions; not sure I'll find the time in the next few days.
I'll see if I can write a script that does it for all of them and uploa
Perhaps you don't have enough heap on the machine; check the batch file for how
much it allocates.
And check batch.properties so that the sum of the mmio memory fits within your
heap with some to spare (Windows only).
Windows sucks for mmio memory management anyway :( and for fast imports as well.
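The "mmio must fit in the heap with some spare" rule above can be sketched as a small checker. This is a hypothetical sketch, not Neo4j code: the store names, sizes, and method names below are invented for illustration; only the budgeting arithmetic from the message is modeled.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MmioBudget {

    // Parse a size string like "2G", "500M", "64K" into bytes.
    static long toBytes(String size) {
        long factor;
        switch (Character.toUpperCase(size.charAt(size.length() - 1))) {
            case 'G': factor = 1L << 30; break;
            case 'M': factor = 1L << 20; break;
            case 'K': factor = 1L << 10; break;
            default:  return Long.parseLong(size); // plain byte count
        }
        return Long.parseLong(size.substring(0, size.length() - 1)) * factor;
    }

    // True if the summed mmio settings plus a safety margin fit in the heap.
    static boolean fitsInHeap(Map<String, String> mmio, long heapBytes, double margin) {
        long total = mmio.values().stream().mapToLong(MmioBudget::toBytes).sum();
        return total * (1 + margin) <= heapBytes;
    }

    public static void main(String[] args) {
        Map<String, String> mmio = new LinkedHashMap<>();
        mmio.put("nodestore", "500M");       // hypothetical values, not
        mmio.put("relationshipstore", "2G"); // taken from a real
        mmio.put("propertystore", "1G");     // batch.properties
        long heap = toBytes("4G");           // e.g. started with -Xmx4G
        System.out.println(fitsInHeap(mmio, heap, 0.1)); // leave ~10% spare
    }
}
```

Running it with these made-up numbers (3.5G mapped vs a 4G heap) prints `true`; shrink the heap and it flips to `false`, which is roughly the situation the error below describes.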
Have the log file. Busy. More later.
On Friday, August 29, 2014 2:43:26 PM UTC-6, Michael Hunger wrote:
Alright, it was actually that the copy gave me something different from tabs.
But now I get:
>import.bat test.db nodes.csv rels.csv
Error occurred during initialization of VM
Could not reserve enough space for object heap
On Friday, 29 August 2014 10:16:42 UTC+2, Curtis Mosters wrote:
Cypher won't be faster than the Java API; only the batch-inserter is faster
than the Java API.
Cypher is just more convenient.
Do you have any long property strings?
Most of the time is spent in the label store and in the index updates; if you
run the same without labels and indexes it will be faster.
neo4j-sh (?)$ USING PERIODIC COMMIT 1
> LOAD CSV WITH HEADERS FROM "file:///C:/test/tls206_part01.txt" AS csvLine
> WITH csvLine LIMIT 10
> CREATE (p:Person { person_id: toInt(csvLine.person_id), doc_std_name_id: csvLine.doc_std_name_id, person_name: csvLine.person_name });
Which Neo4j version are you running?
Can you supply the messages.log files from all 3 instances?
To my knowledge, in a 3-instance cluster you cannot get a master after one
instance leaves, as you only have 2 left and no quorum (minimum 3).
Michael
On 29.08.2014 at 15:10, Patrick Lewando wrote:
And what kind of documents do you want to handle? Files/binary docs, or e.g.
JSON documents, aggregated from database entities?
-Axel.
On 29.08.2014 at 23:20, Michael Hunger wrote:
what kinds of relationships do you want to maintain?
and how are the documents related to the entities or the relationships?
if I understand you correctly
(entity1)-[:SOME_KIND_OF_REL]->(entity2)
(entity1)<-[:ATTACHED]-(document1)
(entity2)<-[:ATTACHED]-(document2)
(entity3)<-[:ATTACHED]-(docume
Hi,
I am working on the design of a system in which I need to maintain
relationships between various kinds of real-world entities, and I also have to
handle documents (an entity can have one or more attachments).
I am a little confused about the database choice and need help identifying
the pros a
Let's assume I have a 2 instance Neo4j cluster and 1 arbiter. Instance 1 is
the master and instance 2 is the slave. If I bring down instance 1, then
instance 2 becomes the new master. However, when I bring instance 1 back
up, it becomes neither a master nor a slave. The console log shows that it's
Neo4j Embedded runs in the JVM that you use it in and it uses the heap of that
VM.
Please share what you do in more detail so that we have any chance to help you
out; right now it is too vague.
Cheers,
Michael
On 29.08.2014 at 20:57, José Cornado wrote:
Right now you can't.
What you can do is increase (e.g. double) the length of the path (as each of
the first 20 could be deleted) and add a LIMIT 20 at the end, but it won't get
faster that way.
But good question.
Michael
On 18.07.2014 at 16:36, Dissolubilis wrote:
> I have some linked l
Yes, I doubled eclipse's heap and it broke the 100,000k barrier. I
increased the size of the backend heap (where all the data goes) to see if
it can reach the 200,000.
A more succinct question: does neo (embedded) create a process of its
own (thus creating another heap) that can be managed with p
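On the succinct question above: as Michael notes elsewhere in the thread, embedded Neo4j does not spawn its own process or heap; it lives inside the host JVM's heap. A minimal sketch (plain Java, no Neo4j dependency) of the one number that matters, which for an Eclipse-hosted tool you raise via -Xmx in eclipse.ini:

```java
public class HeapInfo {

    // Max heap in megabytes, as seen by this JVM. An embedded store
    // shares this budget with everything else in the process.
    static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Max heap (MB): " + maxHeapMb());
        // No second process or heap is created by an embedded database;
        // raising -Xmx on the host application is the only lever.
    }
}
```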
I understand, and I have corrected that. Unfortunately I still have the
issue, but if this is feature-complete I will work on it some more and,
failing to find a solution, I will ask the question over on SO. For
completeness, my index is created like this:
CREATE CONSTRAINT ON (n:`UniqueId`) ASSE
Sure, if you just _create_ nodes then, depending on your heap, you can also
create 1M nodes in one tx.
Sent from mobile device
On 29.08.2014 at 17:47, "'Curtis Mosters' via Neo4j" wrote:
This is already a tiny heap for eclipse, even without neo.
Sent from mobile device
On 29.08.2014 at 18:56, José Cornado wrote:
Please share your neo code and more about your data structure,
and your graph.db/messages.log.
Sent from mobile device
On 29.08.2014 at 18:31, José Cornado wrote:
Currently your merge op uses two labels, :UniqueId and :_UniqueId.
Merge only supports guarantees for one label and one property.
You can set the second label in ON CREATE.
Sent from mobile device
On 29.08.2014 at 16:30, Mark Findlater wrote:
I am using 2.13 and have 2GB of memory.
Eclipse runs with these arguments:
-Dosgi.requiredJavaVersion=1.7 -XstartOnFirstThread
-Dorg.eclipse.swt.internal.carbon.smallFonts -XX:MaxPermSize=256m -Xms40m
-Xmx512m -Xdock:icon=../Resources/Eclipse.icns -XstartOnFirstThread
-Dorg.eclipse.swt.intern
Hello!
I have been running into the following problem lately (I swear that it
didn't happen before):
I have a tool that lives inside an eclipse feature and the tool uses an
embedded instance of neo as its store.
The data itself looks like a broom (several, not only one).
To retrieve nodes at t
While I can't share the code, I can say that we have our own batch insert
process. It actually does searches first to determine whether we are updating
or creating a given node. We run batch inserts of about 40-50k nodes per tx. We
are loading several thousand nodes per second. Overall times vary w
Okay, but still that version is faster overall. OK, you get the GC error
faster. That's right. Well, compared to OrientDB it's very slow. Overall I
can say that currently the inserting of really big files can only be done
with your tool, Michael.
As I said, I will test this evening; hopefully it will work.
Since you can run Cypher from Java, the following would be a valid (but
probably not expected) answer:
new ExecutionEngine(graphDb).execute("match (n:MyLabel) optional match (n)-[r]-() delete r,n");
In pure Java:
try (Transaction tx = graphDb.beginTx()) {
    for (Node n : GlobalGraphOperations.at(graphDb).getAllNodesWithLabel(DynamicLabel.label("MyLabel"))) {
        for (Relationship r : n.getRelationships()) r.delete();
        n.delete();
    }
    tx.success();
}
Thanks Michael.
Sorry, I should have been clearer: there is already a unique constraint
on the 'type' property. What do you mean by "And only the label from the
constraint"?
On Friday, 29 August 2014 15:25:35 UTC+1, Michael Hunger wrote:
You need a unique constraint for this to work,
and only the label from the constraint.
Sent from mobile device
On 29.08.2014 at 15:36, Mark Findlater wrote:
Using Neo4j embedded version 2.1.3 and Spring Data Neo4j 3.1.4.RELEASE, and
seeing odd behaviour when calling MERGE from multiple threads. Is it
expected that concurrent merge operations (with the same values) will
result in a single unique node, and will operations that use the ON CREATE
and ON
There is something wrong then; it should work with little memory.
Can you share the full, split-up import script that you used?
Michael
Sent from mobile device
On 29.08.2014 at 14:01, Chris Roberts wrote:
Actually, looking at your old version again: you use just one single tx.
Repeated tx.success() calls don't have an effect.
Sent from mobile device
On 29.08.2014 at 10:57, "'Curtis Mosters' via Neo4j" wrote:
Hi Arnaud,
If you are using the Java API, then there are several ways to find objects
in the index. It sounds like you want something like 'find all Geometries
that are contained within this area'? Or perhaps intersecting an area? Can
you explain in more detail what your actual query is, then I ca
Splitting it up worked well. I still had to give my VM 32 GB of memory and
28 GB heap, but the files I was importing were more than 50MB each, the
largest being 165MB with about 5 million rows, which took maybe 4 minutes
to import. I just didn't expect to need so much memory; there is a table in
th
I haven't watched neo4j run for long periods of time, but I would try
this to start:
- put everything on HDD
- shut down Neo4j
- move everything under graph.db, keystore, and possibly rrd on SSD
- create symlinks for the graph.db directory and the keystore and
possibly rrd files
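The move-and-symlink steps above can be sketched as a shell session. The paths here are throwaway temp directories standing in for the real HDD data directory and SSD mount point, so the sketch is safe to run as-is; substitute your actual graph.db location and SSD mount when doing it for real (with Neo4j shut down).

```shell
HDD=$(mktemp -d)   # stands in for the data dir on the HDD
SSD=$(mktemp -d)   # stands in for the SSD mount point
mkdir "$HDD/graph.db"                  # pretend store directory
mv "$HDD/graph.db" "$SSD/graph.db"     # relocate the store onto the SSD
ln -s "$SSD/graph.db" "$HDD/graph.db"  # symlink back so Neo4j still finds it
ls -l "$HDD/graph.db"                  # shows a symlink pointing at the SSD
```

The same pattern applies to the keystore and rrd files mentioned above.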
Hi All,
How can I remove all nodes (and their relationships) with a given label
using Java?
Thanks,
Alireza
--
You received this message because you are subscribed to the Google Groups
"Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to neo4j+un
Tested on AWS on an Intel Xeon CPU E5-2670 v2 @ 2.5 GHz.
With the addition I need ~100 sec for 1 mio lines.
With my old version of inserting every line it was 14 sec.
So somehow batch committing is much slower.
On Thursday, 28 August 2014 22:18:46 UTC+2, Michael Hunger wrote:
Well, I just copied the examples from the website. I think the problem was
just copying it. Will test this evening. Thanks.
On Thursday, 28 August 2014 22:20:48 UTC+2, Michael Hunger wrote:
> What is the given data?
> It seems that you use spaces instead of tabs for separation?
Ohh, I didn't know that this was a command. Sorry. Could you maybe format it
as code next time? That makes it much easier to see. Will test it this
evening. Thanks.
On Thursday, 28 August 2014 22:15:55 UTC+2, Michael Hunger wrote:
> did you see the _WITH csvLine_ before the limit?