On Thu, Feb 17, 2011 at 12:54 PM, Pablo Pareja <ppar...@era7.com> wrote:

> Hi Massimo,
>
> It's too bad you are running into the same kind of situations, (especially
> when
> the conclusion you came to is that Neo4j just degrades...).
> However, did you already try dividing "the" big insertion process into
> smaller
> steps?

Well, I do "big transactions" since the BatchInserter (from the wiki)
is not an option for me. I'm doing 10000 inserts per transaction, but
as soon as the db grows, performance drops inexorably.
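For the record, the batching logic I mean is roughly the following sketch (a generic, stand-alone illustration of the commit-every-N pattern; `insertInBatches`, `insert` and `commit` are placeholder names, not Neo4j API, so it can be read independently of the library):

```java
import java.util.function.IntConsumer;

public class BatchCommit {
    static final int BATCH_SIZE = 10000;

    // Runs `insert` for each of n items, invoking `commit` after every
    // BATCH_SIZE inserts and once more at the end for any remainder.
    // Returns the number of commits performed.
    static int insertInBatches(int n, IntConsumer insert, Runnable commit) {
        int commits = 0;
        for (int i = 0; i < n; i++) {
            insert.accept(i);
            if ((i + 1) % BATCH_SIZE == 0) {
                commit.run();   // in Neo4j terms: tx.success(); tx.finish(); open a new tx
                commits++;
            }
        }
        if (n % BATCH_SIZE != 0) {
            commit.run();       // flush the final partial batch
            commits++;
        }
        return commits;
    }

    public static void main(String[] args) {
        // 25000 inserts at 10000 per transaction -> 3 commits
        System.out.println(insertInBatches(25000, i -> {}, () -> {}));
    }
}
```

In the real code, `commit` corresponds to marking the current transaction successful, finishing it, and beginning a fresh one, so memory held by the transaction state is released every 10000 inserts.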

Here is a summary of the latest results, which store nodes with only
one String (IPv4 address) property each: it starts from taking 1.05ms
to insert a Node into a db with 440744 nodes and ends up taking
8.75ms to insert a Node into a db with 12545155 nodes.

The final DB size is 2.9G, since I tweaked the string_block_size at
graphdb creation time to 60 bytes instead of 120...
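For anyone wanting to reproduce the tweak: with the Neo4j 1.x embedded API it can be passed as a config map at creation time, something like the fragment below (a sketch assuming the 1.x `EmbeddedGraphDatabase` constructor that takes a config map; the store path is hypothetical, and the setting only takes effect when the store files are first created):

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.kernel.EmbeddedGraphDatabase;

// string_block_size must be set before the store exists; 60 bytes is
// enough for short strings like IPv4 addresses, halving the default 120.
GraphDatabaseService db = new EmbeddedGraphDatabase(
        "target/ip-db",                                   // hypothetical path
        MapUtil.stringMap("string_block_size", "60"));
```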

If anyone is interested I could provide the complete table of progression...

> I mean, do you think Neo4j degradation is just proportional to DB size ?

It seems so, or at least proportional to the number of Nodes, which is
understandable. What makes me wonder is that performance gets so bad
that it compromises usability, though I understand I could be doing
something wrong.

> or rather just to the amount of data being inserted in the same Batch
> Insertion?

As I said, I use the big transaction pattern from the wiki, not the Batch insert.

If anyone is interested I could provide more data... let me know, I
hope to be able to use neo4j for this kind of work.
-- 
Massimo
http://meridio.blogspot.com
_______________________________________________
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user