Re: [Neo4j] from Neo4j 1.9.9 to Neo4j 2.2.0 exception querying Lucene

2015-05-12 Thread Mattias Persson
It should be included in 2.2.2 when that gets released, as well as in the 
next 2.3 milestone (2.3-M02).

On Wednesday, May 6, 2015 at 2:19:30 PM UTC+2, Rita wrote:
>
> Thank you very much for the bug fix: https://github.com/neo4j/neo4j/pull/4509
>
> When will it be possible to get this fix into a new release?
>
> Rita

Re: [Neo4j] from Neo4j 1.9.9 to Neo4j 2.2.0 exception querying Lucene

2015-05-06 Thread Rita
Thank you very much for the bug fix: https://github.com/neo4j/neo4j/pull/4509

When will it be possible to get this fix into a new release?

Rita


Re: [Neo4j] from Neo4j 1.9.9 to Neo4j 2.2.0 exception querying Lucene

2015-04-20 Thread Rita
Hi Mattias,
thank you for the reply. I opened this issue: 
https://github.com/neo4j/neo4j/issues/ . So you confirm that the Java 
heap space error is caused by a possible bug in index handling. In that 
case I'll wait for news about a fix and retry in the future, hopefully soon!
Thank you for your work!

Rita




Re: [Neo4j] from Neo4j 1.9.9 to Neo4j 2.2.0 exception querying Lucene

2015-04-20 Thread Mattias Persson
Hi, the returned IndexHits instance doesn't put everything into that set 
right away, but over time, to avoid returning duplicates when combining 
transaction state and store state. Looking at it right now, I see that this 
can be made much better by returning hits from transaction state first, 
putting _only_ those ids into the set, and then comparing against it - but 
not adding to it - when iterating over the ids from the store.

Let me see if I can get around fixing that soon...
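
[Editor's note: a minimal sketch of the strategy Mattias describes, in 
plain Java with illustrative names. This is not the actual Neo4j internal 
code; the real implementation uses primitive long collections rather than 
boxed sets.]

import java.util.HashSet;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Set;

// Exhaust transaction-state hits first, remembering only those ids; then
// stream store hits and merely *check* the set without growing it.
class DeduplicatingHits implements Iterator<Long> {
    private final Iterator<Long> txStateHits;
    private final Iterator<Long> storeHits;
    private final Set<Long> seenInTxState = new HashSet<>();
    private Long next;

    DeduplicatingHits(Iterator<Long> txStateHits, Iterator<Long> storeHits) {
        this.txStateHits = txStateHits;
        this.storeHits = storeHits;
    }

    @Override
    public boolean hasNext() {
        while (next == null) {
            if (txStateHits.hasNext()) {
                // Phase 1: every tx-state id is recorded and returned.
                next = txStateHits.next();
                seenInTxState.add(next);
            } else if (storeHits.hasNext()) {
                // Phase 2: store ids are compared against the set but never
                // added to it, so memory stays bounded by tx-state size.
                Long candidate = storeHits.next();
                if (!seenInTxState.contains(candidate)) {
                    next = candidate;
                }
            } else {
                return false;
            }
        }
        return true;
    }

    @Override
    public Long next() {
        if (!hasNext()) throw new NoSuchElementException();
        Long result = next;
        next = null;
        return result;
    }
}

The point is that the set's size becomes bounded by the transaction state 
rather than by the store result, which is exactly what blew up the heap for 
a "*" query over a large index.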



Re: [Neo4j] from Neo4j 1.9.9 to Neo4j 2.2.0 exception querying Lucene

2015-04-17 Thread Rita


I am sorry to report that those Lucene queries actually behave 
inconsistently: they are usually slower than before, and in most cases I 
get that error again (using a 6GB heap out of 8GB total RAM).

I am updating the graph with deletes and updates of nodes and 
relationships. I decreased the number of operations per transaction, but 
most of the time I still get this error.

Insertion with the Batch Inserter, on the other hand, seems to be fine!

What do you suggest? Would keeping version 1.9.9, or upgrading to 2.2.1, 
be a solution? Does the new version cover the packages related to these 
problems?


Thanks in advance

Rita
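
[Editor's note: for reference, a minimal sketch of the kind of batch 
insertion Rita says works for her. The store path, index name, key and 
value are made-up placeholders, and the import paths follow the 2.x 
batch-insertion documentation; adjust them to your exact version.]

import java.util.Map;

import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.index.lucene.unsafe.batchinsert.LuceneBatchInserterIndexProvider;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserterIndex;
import org.neo4j.unsafe.batchinsert.BatchInserterIndexProvider;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class BatchLoad {
    public static void main(String[] args) {
        // Batch insertion bypasses transactions entirely, which is
        // consistent with it not hitting the IndexHits code path at all.
        BatchInserter inserter = BatchInserters.inserter("target/batch-db");
        BatchInserterIndexProvider indexProvider =
                new LuceneBatchInserterIndexProvider(inserter);
        try {
            BatchInserterIndex index = indexProvider.nodeIndex(
                    "myIndex", MapUtil.stringMap("type", "exact"));
            Map<String, Object> props = MapUtil.map("cs", "someValue");
            long nodeId = inserter.createNode(props);
            index.add(nodeId, props);
            index.flush();
        } finally {
            // Shut down the index provider before the inserter.
            indexProvider.shutdown();
            inserter.shutdown();
        }
    }
}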



Re: [Neo4j] from Neo4j 1.9.9 to Neo4j 2.2.0 exception querying Lucene

2015-04-16 Thread Rita
Thank you for the reply, Michael. I have just filed the issue.
I was using -Xmx4g as usual. Now I've tried with 6GB, opening and closing 
a single transaction for every query like that on the different indexes, 
and I do not get this exception.
So the same operation now needs more memory. I'll try other cases. Please 
let me know if there is any news on the issue.
Thank you.

Regards
Rita
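
[Editor's note: a sketch of the per-query transaction pattern Rita 
describes, for Neo4j 2.x embedded mode. The index name and key are 
placeholders; both Transaction and IndexHits are AutoCloseable in 2.x, so 
try-with-resources releases per-query state promptly.]

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.index.Index;
import org.neo4j.graphdb.index.IndexHits;

public final class IndexCounts {
    // One short-lived transaction per index query, so the state behind
    // each IndexHits (including its deduplication set) can be reclaimed
    // before the next query runs.
    public static int countHits(GraphDatabaseService db, String indexName, String key) {
        try (Transaction tx = db.beginTx()) {
            Index<Node> index = db.index().forNodes(indexName);
            try (IndexHits<Node> hits = index.query(key, "*")) {
                int count = hits.size();
                tx.success();
                return count;
            }
        }
    }
}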



Re: [Neo4j] from Neo4j 1.9.9 to Neo4j 2.2.0 exception querying Lucene

2015-04-16 Thread Michael Hunger
This seems to be a bug.

How much heap do you have? 

Could you raise an issue on github.com/neo4j/neo4j/issues?

Thanks so much

Michael

> On 16.04.2015 at 11:26, Rita wrote:
> 
> Hi all,
> I am migrating from Neo4j 1.9.9 to Neo4j 2.2.0, in embedded mode using 
> Java. I have added transactions for read operations as well, but now when 
> I query my Lucene indexes like this
> 
> IndexHits<Node> rhits = index.query("cs", "*");
> out.println("#" + rhits.size());
> rhits.close();
> 
> 
> as you can see, I do not need to iterate over the result; I only need the 
> number of results. But this new Neo4j version appears to load everything 
> into memory, and I get the following error on the first instruction.
> 
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
>     at org.neo4j.collection.primitive.hopscotch.IntArrayBasedKeyTable.initializeTable(IntArrayBasedKeyTable.java:54)
>     at org.neo4j.collection.primitive.hopscotch.IntArrayBasedKeyTable.<init>(IntArrayBasedKeyTable.java:48)
>     at org.neo4j.collection.primitive.hopscotch.LongKeyTable.<init>(LongKeyTable.java:27)
>     at org.neo4j.collection.primitive.Primitive.longSet(Primitive.java:66)
>     at org.neo4j.kernel.impl.coreapi.LegacyIndexProxy$1.<init>(LegacyIndexProxy.java:296)
>     at org.neo4j.kernel.impl.coreapi.LegacyIndexProxy.wrapIndexHits(LegacyIndexProxy.java:294)
>     at org.neo4j.kernel.impl.coreapi.LegacyIndexProxy.query(LegacyIndexProxy.java:352)
> 
> I never got this with older versions of Neo4j! I always performed this 
> operation up to version 1.9.9.
> Could you please help me avoid it? Is it a bug in the library 
> implementation, or do I have to change the way I query?
> 
> Thanks in advance,
> Rita
> 
