Re: [Neo] Troubleshooting performance/memory issues

2009-12-10 Thread Johan Svensson
Hi,

The logical log stores all transactions and rotates itself at a
configurable size (default 10MB). Once the log is rotated, its data is
flushed to the store files, so we only have to perform recovery on the
"latest" log file in case of a crash. Any transaction still running at
rotation time is copied from the old log into the new one.
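
If you want to experiment with the rotation size, it can be passed as a
startup parameter to the embedded instance. A minimal sketch follows;
note that I am writing the parameter name from memory here, so please
double-check it against the configuration documentation for your
version before relying on it:

import java.util.HashMap;
import java.util.Map;

import org.neo4j.api.core.EmbeddedNeo;
import org.neo4j.api.core.NeoService;

public class RotationConfig
{
    public static void main( String[] args )
    {
        Map<String,String> params = new HashMap<String,String>();
        // ASSUMPTION: parameter name written from memory, verify it
        // against your version's docs. Value is the rotation size in bytes.
        params.put( "logical_log_rotation_threshold",
            String.valueOf( 100 * 1024 * 1024 ) ); // 100MB
        // Uses the constructor variant that accepts a parameter map.
        NeoService neo = new EmbeddedNeo( "var/neo", params );
        try
        {
            // ... normal work against 'neo' goes here ...
        }
        finally
        {
            neo.shutdown();
        }
    }
}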

A transaction commit results in a flush to the logical log file (and
the global transaction log) if the transaction is a "write transaction"
(read-only transactions do not touch the log files).

You mentioned a 10x performance increase going from a chunk size of 1k
nodes to 10k, and that is quite a lot. I could understand it if you had
gone from 100 to 10k (we usually max out write performance somewhere
around 20k-100k write operations per transaction, but it depends on
hardware). What kind of system is this running on (OS, file system
(ext4?), hardware)?
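
For reference, this is the shape of batching loop I have in mind when
talking about write operations per transaction. The chunk size, node
count and property here are just placeholders for your own load:

import org.neo4j.api.core.EmbeddedNeo;
import org.neo4j.api.core.NeoService;
import org.neo4j.api.core.Transaction;

public class ChunkedInsert
{
    private static final int CHUNK_SIZE = 10000; // tune towards 20k-100k ops/tx
    private static final int TOTAL_NODES = 500000;

    public static void main( String[] args )
    {
        NeoService neo = new EmbeddedNeo( "var/neo" );
        try
        {
            int created = 0;
            while ( created < TOTAL_NODES )
            {
                Transaction tx = neo.beginTx();
                try
                {
                    for ( int i = 0; i < CHUNK_SIZE && created < TOTAL_NODES; i++ )
                    {
                        neo.createNode().setProperty( "name", "node-" + created );
                        created++;
                    }
                    tx.success(); // mark the chunk for commit
                }
                finally
                {
                    tx.finish(); // commits here and releases the tx state
                }
            }
        }
        finally
        {
            neo.shutdown();
        }
    }
}

Larger chunks amortize the per-commit flush to the logical log, which
is why going from 1k to 10k can make such a difference; very large
chunks eventually cost you heap instead, since the transaction state is
kept in memory until commit.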

Regards,
-Johan

On Wed, Dec 9, 2009 at 9:59 PM, Rick Bullotta wrote:
> FYI, we experimented with a different heap size (1GB), along with
> different "chunk sizes", and were able to eliminate the heap error and
> get about a 10x improvement in insert speed. It would be helpful to
> better understand the interactions of the various Neo startup
> parameters, transaction buffers, and so on, and their impact on
> performance. I read the performance guidelines, which were of some
> help, but some additional scenario-based recommendations might also
> help (frequent updates/frequent access, infrequent updates/frequent
> access, burst-mode updates vs. a steady update rate, etc.).
>
> Learning more about Neo every hour!


Re: [Neo] Troubleshooting performance/memory issues

2009-12-09 Thread Rick Bullotta
FYI, we experimented with a different heap size (1GB), along with
different "chunk sizes", and were able to eliminate the heap error and
get about a 10x improvement in insert speed. It would be helpful to
better understand the interactions of the various Neo startup
parameters, transaction buffers, and so on, and their impact on
performance. I read the performance guidelines, which were of some
help, but some additional scenario-based recommendations might also
help (frequent updates/frequent access, infrequent updates/frequent
access, burst-mode updates vs. a steady update rate, etc.).
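
For anyone following along, here is roughly the shape of the startup
tuning we are poking at. The JVM itself gets a bigger heap (java
-Xmx1024m ...). The mapped-memory parameter names are the ones from the
performance guidelines; the values below are arbitrary placeholders,
not recommendations:

import java.util.HashMap;
import java.util.Map;

import org.neo4j.api.core.EmbeddedNeo;
import org.neo4j.api.core.NeoService;

public class TunedStartup
{
    public static void main( String[] args )
    {
        Map<String,String> params = new HashMap<String,String>();
        // Values are placeholders; size them to your store files and RAM.
        params.put( "neostore.nodestore.db.mapped_memory", "100M" );
        params.put( "neostore.relationshipstore.db.mapped_memory", "200M" );
        params.put( "neostore.propertystore.db.mapped_memory", "100M" );
        NeoService neo = new EmbeddedNeo( "var/neo", params );
        try
        {
            // ... chunked inserts go here ...
        }
        finally
        {
            neo.shutdown();
        }
    }
}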

Learning more about Neo every hour!



[Neo] Troubleshooting performance/memory issues

2009-12-09 Thread Rick Bullotta
Hi, all.

 

When trying to load a few hundred thousand nodes & relationships
(chunking them in groups of 1000 nodes or so), we are getting an
out-of-memory heap error after 15-20 minutes or so. No big deal, we
expanded the heap settings for the JVM. But then we also noticed that
the nioneo_logical_log.xxx file was continuing to grow, even though we
were wrapping each group of 1000 inserts in its own transaction (there
is no other transaction active), committing with success, and finishing
each group of 1000. Periodically (seemingly unrelated to our
transactions finishing), that file shrinks again and the data is
flushed to the other Neo propertystore and relationshipstore files. I
just wanted to check whether that is normal behavior, or whether there
is something wrong with the way we (or Neo) are handling the
transactions, and thus the reason we hit an out-of-memory error.
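
In case it matters, this is essentially the pattern we use for each
group of 1000 (simplified; the property values are placeholders):

import org.neo4j.api.core.NeoService;
import org.neo4j.api.core.Node;
import org.neo4j.api.core.Transaction;

public class InsertChunk
{
    // 'neo' is our already-started embedded instance.
    static void insertChunk( NeoService neo, int chunkSize )
    {
        Transaction tx = neo.beginTx();
        try
        {
            for ( int i = 0; i < chunkSize; i++ )
            {
                Node node = neo.createNode();
                node.setProperty( "name", "node-" + i );
            }
            tx.success(); // the "committing with success" part
        }
        finally
        {
            tx.finish(); // commits the chunk
        }
    }
}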

 

Thanks,

 

Rick

 

___
Neo mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user