Thank you, that helps.
Luda
> The natural way is to use transactions to group a couple of thousand
> operations when you do a big batch insert. Here is a very simple
> example that creates and indexes 100 000 nodes.
>
> NeoService neo = ...;
> IndexService index = ...;
>
> Transaction tx = neo.beginTx();
> try
> {
>     for ( int i = 1; i <= 100000; i++ )
>     {
>         Node node = neo.createNode();
>         String name = "The name of " + i;
>         node.setProperty( "name", name );
>         index.index( node, "name", name );
>         if ( i % 5000 == 0 )
>         {
>             tx.success();
>             tx.finish();
>             tx = neo.beginTx();
>         }
>     }
>     tx.success();
> }
> finally
> {
>     tx.finish();
> }
>
>
> Here neo will only write to disk (commit) every 5000 nodes, so disk
> I/O won't be a bottleneck.
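The batch-commit pattern above can be sketched without any Neo4j dependency, to show the logic in isolation. This is a hypothetical helper (the names `insertAll`, `sink`, and the list-based "commit" are illustrative, not part of any Neo API): buffer work items and flush every `batchSize` operations, plus a final flush for any partial batch.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFlush
{
    // Buffers items and "commits" (copies to sink) every batchSize items,
    // analogous to tx.success()/tx.finish()/beginTx() in the example above.
    // Returns the number of flushes performed.
    static int insertAll( int total, int batchSize, List<Integer> sink )
    {
        List<Integer> buffer = new ArrayList<>();
        int flushes = 0;
        for ( int i = 1; i <= total; i++ )
        {
            buffer.add( i );
            if ( i % batchSize == 0 )
            {
                sink.addAll( buffer ); // the "commit"
                buffer.clear();
                flushes++;
            }
        }
        if ( !buffer.isEmpty() ) // final partial batch
        {
            sink.addAll( buffer );
            flushes++;
        }
        return flushes;
    }

    public static void main( String[] args )
    {
        List<Integer> sink = new ArrayList<>();
        int flushes = insertAll( 100000, 5000, sink );
        System.out.println( flushes + " flushes, " + sink.size() + " items" );
    }
}
```

With 100 000 items and a batch size of 5000, this performs 20 flushes instead of 100 000 individual commits, which is exactly why disk I/O stops being the bottleneck.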
>
> Was this something you were looking for?
>
>
> / Mattias
>
>
> 2009/3/17 Lyudmila Balakireva <[email protected]>:
>> Hello,
>> Are there any options to optimize loading into the neo db? I am using
>> the native neo API and the LuceneIndex service.
>> Is there any way to control the frequency of writing to disk?
>> Thank you, Luda
>> _______________________________________________
>> Neo mailing list
>> [email protected]
>> https://lists.neo4j.org/mailman/listinfo/user
>>
>
>
>
> --
> Mattias Persson, [[email protected]]
> Neo Technology, www.neotechnology.com