Jim,
Fair enough. For now, I'll just know not to try and make batches that big
:-) My own use case is for the transaction safety rather than trying to
create thousands of entities at once, so it doesn't affect me that much. I
just wanted to have something more concrete to tell other users who might
run into it.
Hey Josh,
I wonder whether we have a memory leak in that code, or whether Jackson does.
I'll drop this into the community backlog for further investigation.
Jim
I bumped the maxmemory up to 512 and ran a batch to create 10 nodes
(repeated 10 times). After an average of 20 seconds, I always received the
following response:
HTTP/1.1 100 Continue
HTTP/1.1 500 Java heap space
Content-Type: text/html; charset=iso-8859-1
Cache-Control: must-revalidate,no…
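
For reference, the kind of request being tested here is a single POST of a
JSON array of operations to the REST batch endpoint. A minimal PHP sketch,
assuming a default local server on port 7474; the batch size and node
properties are illustrative, not the exact ones from the test:

<?php
// Build a batch payload in the Neo4j REST batch API format:
// a JSON array of {method, to, body, id} operations, sent in one POST.
$ops = array();
for ($i = 0; $i < 1000; $i++) {  // batch size is illustrative
    $ops[] = array(
        'method' => 'POST',
        'to'     => '/node',
        'body'   => array('name' => 'node' . $i),
        'id'     => $i,
    );
}

$ch = curl_init('http://localhost:7474/db/data/batch');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($ops));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Time the whole round trip, including server-side (de)serialization.
$start = microtime(true);
$response = curl_exec($ch);
printf("%d ops in %.2fs\n", count($ops), microtime(true) - $start);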
Jim,
When I was running into the issue, I set maxmemory=256 and can confirm
that it took much longer to fail, but it did fail in the same way. I didn't
think of setting it smaller than the default, but I suspect you are correct.
I'll try it that way when I attempt to generate the stack trace.
Hey Josh,
You can validate what Peter's suggesting by setting a small heap when you run
the server.
If you edit conf/neo4j-wrapper.conf you can override the property for heap size
with something like this:
wrapper.java.maxmemory=1
Then you should (in theory) be able to see the batch operation fail much
sooner.
The heap space explanation would make sense, I think, because we currently
deserialize and serialize in place, keeping the whole payload in memory. It
would be interesting to see whether we could implement a setup that streams
the deserialization/serialization, getting rid of the memory overhead.
You said you …
Hey Peter,
I don't have any way of verifying that on the server side, other than
measuring the time it takes for curl_exec to return a response. On the
client side, I can see that PHP's json_encode/json_decode functions are
taking less than 0.5% of the total run time, even with a batch size of 1. …
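
For reference, that measurement amounts to timing the JSON calls separately
from the round trip around curl_exec. A minimal sketch, assuming the same
local server as above and an illustrative payload:

<?php
// Compare client-side JSON cost against the total round-trip time.
$ops = array_fill(0, 1000, array(
    'method' => 'POST', 'to' => '/node', 'body' => array('name' => 'test'),
));

$t = microtime(true);
$json = json_encode($ops);
$encode = microtime(true) - $t;

$ch = curl_init('http://localhost:7474/db/data/batch');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_POSTFIELDS, $json);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$t = microtime(true);
$response = curl_exec($ch);
$total = microtime(true) - $t;

$t = microtime(true);
json_decode($response, true);  // decode the per-operation results
$decode = microtime(true) - $t;

printf("JSON work: %.4fs of %.4fs round trip (%.2f%%)\n",
    $encode + $decode, $total, 100 * ($encode + $decode) / $total);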
Josh,
it might be that parsing the JSON payload is taking up more and more of the
time as the batches get big. At least that is my suspicion. That might also
be the reason for the heap problems - basically the string parsing is
taking over :/
Do you have any means of verifying that?
Cheers,
Hey all,
I've been working on adding batch support to Neo4jPHP
(http://github.com/jadell/Neo4jPHP). Here are the results of my latest
benchmarks. First column is the number of nodes being inserted, second
column is the average time in seconds over 5 runs to insert that many nodes
in a single batch, third column is …
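
For context, a batched insert through Neo4jPHP looks roughly like the
sketch below. The startBatch/commitBatch method names are taken from the
Neo4jPHP documentation, and the autoload path is an assumption; adjust both
to match the actual install:

<?php
// Sketch of a batched insert via Neo4jPHP.
require_once 'neo4jphp/autoload.php';  // path is an assumption; adjust

use Everyman\Neo4j\Client,
    Everyman\Neo4j\Transport;

$client = new Client(new Transport('localhost', 7474));

$client->startBatch();
for ($i = 0; $i < 1000; $i++) {
    $client->makeNode()
           ->setProperty('name', 'node' . $i)
           ->save();  // queued locally until the batch commits
}
$client->commitBatch();  // all queued saves go over the wire at once

The point of the batch is that every queued save is sent as a single
request, which is also why a very large batch can exhaust the server-side
heap while it deserializes the whole payload.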