[jira] [Updated] (LUCENE-7200) stop word/punctuation QueryParser error(return all records)

2016-04-11 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-7200:
---
Priority: Minor  (was: Major)

> stop word/punctuation QueryParser error(return all records)
> ---
>
> Key: LUCENE-7200
> URL: https://issues.apache.org/jira/browse/LUCENE-7200
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 4.10.4, 5.4.1
>Reporter: Littlestar
>Priority: Minor
>
> When a user enters only stop words or punctuation, the query returns all records.
> {code}
> @Test
> public void test01() throws ParseException {
>     QueryParser parser = new QueryParser("test", new CJKAnalyzer());
>     Query parse = parser.parse("test:hello AND (a)");
>     // Query parse = parser.parse("test:hello AND (;)");
>     System.out.println(parse.toString());
> }
> {code}
> Should "test:hello AND (a)" be treated as equivalent to "test:hello"?
> Thanks.
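What the reporter observes likely comes from analysis-time stop-word removal: the clause `(a)` produces no terms after the analyzer runs, so the parenthesized clause drops out of the parsed query, and the overall query can become much broader than intended. A minimal pure-Java simulation of that clause-dropping effect; the stop-word set and whitespace tokenization below are illustrative assumptions, not CJKAnalyzer's actual behavior:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class StopWordClauseDemo {
    // Illustrative stop-word set; the analyzer's real list differs.
    static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("a", "an", "and", "the"));

    /** Simulates analysis of one clause: returns the terms that survive. */
    static List<String> analyze(String clause) {
        return Arrays.stream(clause.toLowerCase().split("\\W+"))
                .filter(t -> !t.isEmpty() && !STOP_WORDS.contains(t))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // "(a)" analyzes to nothing, so "test:hello AND (a)" keeps only one clause.
        System.out.println(analyze("hello")); // [hello]
        System.out.println(analyze("(a)"));   // []
    }
}
```

Whether such a degenerate clause should match everything, match nothing, or be silently dropped is exactly the question this issue raises.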



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7200) stop word/punctuation QueryParser error(return all records)

2016-04-11 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-7200:
---
Description: 
When a user enters only stop words or punctuation, the query returns all records.
{code}
@Test
public void test01() throws ParseException {
    QueryParser parser = new QueryParser("test", new CJKAnalyzer());
    Query parse = parser.parse("test:hello AND (a)");
    // Query parse = parser.parse("test:hello AND (;)");
    System.out.println(parse.toString());
}
{code}

Should "test:hello AND (a)" be treated as equivalent to "test:hello"?

Thanks.

  was:
When a user enters only stop words or punctuation, the query returns all records.

@Test
public void test01() throws ParseException {
    QueryParser parser = new QueryParser("test", new CJKAnalyzer());
    Query parse = parser.parse("test:hello AND (a)");
    // Query parse = parser.parse("test:hello AND (;)");
    System.out.println(parse.toString());
}

Should "test:hello AND (a)" be treated as equivalent to "test:hello"?

Thanks.


> stop word/punctuation QueryParser error(return all records)
> ---
>
> Key: LUCENE-7200
> URL: https://issues.apache.org/jira/browse/LUCENE-7200
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 4.10.4, 5.4.1
>Reporter: Littlestar
>
> When a user enters only stop words or punctuation, the query returns all records.
> {code}
> @Test
> public void test01() throws ParseException {
>     QueryParser parser = new QueryParser("test", new CJKAnalyzer());
>     Query parse = parser.parse("test:hello AND (a)");
>     // Query parse = parser.parse("test:hello AND (;)");
>     System.out.println(parse.toString());
> }
> {code}
> Should "test:hello AND (a)" be treated as equivalent to "test:hello"?
> Thanks.






[jira] [Created] (LUCENE-7200) stop word/punctuation QueryParser error(return all records)

2016-04-10 Thread Littlestar (JIRA)
Littlestar created LUCENE-7200:
--

 Summary: stop word/punctuation QueryParser error(return all 
records)
 Key: LUCENE-7200
 URL: https://issues.apache.org/jira/browse/LUCENE-7200
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 5.4.1, 4.10.4
Reporter: Littlestar


When a user enters only stop words or punctuation, the query returns all records.

@Test
public void test01() throws ParseException {
    QueryParser parser = new QueryParser("test", new CJKAnalyzer());
    Query parse = parser.parse("test:hello AND (a)");
    // Query parse = parser.parse("test:hello AND (;)");
    System.out.println(parse.toString());
}

Should "test:hello AND (a)" be treated as equivalent to "test:hello"?

Thanks.






[jira] [Comment Edited] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-12 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274656#comment-14274656
 ] 

Littlestar edited comment on LUCENE-6170 at 1/13/15 3:46 AM:
-

I shared the result of MultiDocValues.getBinaryValues / getSortedValues across several threads, and that triggered the problem.
When I changed the code to call MultiDocValues.getBinaryValues separately in each thread, it works, but is very slow.
Multithreading may be the cause.

How should BinaryDocValues be used from multiple threads? Thanks.

/** Returns a BinaryDocValues for a reader's docvalues (potentially merging on-the-fly).
 *
 * This is a slow way to access binary values. Instead, access them per-segment
 * with {@link AtomicReader#getBinaryDocValues(String)}.
 */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were
indexed for this field. The returned instance should only be used by a single
thread.
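Since the returned instance is documented as single-threaded, one workaround pattern is to cache one instance per thread rather than sharing one or re-fetching on every call. A pure-Java sketch of the pattern; `ExpensiveValues` is a hypothetical stand-in for the per-thread docvalues object, not a Lucene class:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PerThreadCacheDemo {
    /** Hypothetical stand-in for a non-thread-safe, expensive-to-build object,
     *  like the result of MultiDocValues.getBinaryValues. */
    static class ExpensiveValues {
        static final AtomicInteger CREATED = new AtomicInteger();
        ExpensiveValues() { CREATED.incrementAndGet(); }
        int get(int doc) { return doc * 2; } // toy per-document lookup
    }

    // Lazily builds one private instance per thread; threads never share it.
    static final ThreadLocal<ExpensiveValues> VALUES =
            ThreadLocal.withInitial(ExpensiveValues::new);

    static int lookup(int doc) {
        return VALUES.get().get(doc); // each thread only touches its own copy
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { lookup(1); lookup(2); });
        Thread t2 = new Thread(() -> lookup(3));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Two threads performed lookups, so exactly two instances were built.
        System.out.println(ExpensiveValues.CREATED.get()); // 2
    }
}
```

This amortizes the expensive merged view across many lookups per thread; accessing the values per-segment via AtomicReader#getBinaryDocValues, as the javadoc recommends, remains the faster option.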


was (Author: cnstar9988):
I shared the result of MultiDocValues.getBinaryValues across several threads, and that triggered the problem.
When I changed the code to call MultiDocValues.getBinaryValues separately in each thread, it works, but is very slow.
Multithreading may be the cause.

How should BinaryDocValues be used from multiple threads? Thanks.

/** Returns a BinaryDocValues for a reader's docvalues (potentially merging on-the-fly).
 *
 * This is a slow way to access binary values. Instead, access them per-segment
 * with {@link AtomicReader#getBinaryDocValues(String)}.
 */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were
indexed for this field. The returned instance should only be used by a single
thread.

> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at 
> org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)






[jira] [Comment Edited] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-12 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274656#comment-14274656
 ] 

Littlestar edited comment on LUCENE-6170 at 1/13/15 3:46 AM:
-

I shared the result of MultiDocValues.getBinaryValues across several threads, and that triggered the problem.
When I changed the code to call MultiDocValues.getBinaryValues separately in each thread, it works, but is very slow.
Multithreading may be the cause.

How should BinaryDocValues be used from multiple threads? Thanks.

/** Returns a BinaryDocValues for a reader's docvalues (potentially merging on-the-fly).
 *
 * This is a slow way to access binary values. Instead, access them per-segment
 * with {@link AtomicReader#getBinaryDocValues(String)}.
 */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were
indexed for this field. The returned instance should only be used by a single
thread.


was (Author: cnstar9988):
I shared the result of MultiDocValues.getBinaryValues across several threads, and that triggered the problem.
When I changed the code to call MultiDocValues.getBinaryValues separately in each thread, it works OK.
Multithreading may be the cause.

How should BinaryDocValues be used from multiple threads? Thanks.

/** Returns a BinaryDocValues for a reader's docvalues (potentially merging on-the-fly).
 *
 * This is a slow way to access binary values. Instead, access them per-segment
 * with {@link AtomicReader#getBinaryDocValues(String)}.
 */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were
indexed for this field. The returned instance should only be used by a single
thread.

> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at 
> org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)






[jira] [Comment Edited] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-12 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274656#comment-14274656
 ] 

Littlestar edited comment on LUCENE-6170 at 1/13/15 3:45 AM:
-

I shared the result of MultiDocValues.getBinaryValues across several threads, and that triggered the problem.
When I changed the code to call MultiDocValues.getBinaryValues separately in each thread, it works OK.
Multithreading may be the cause.

How should BinaryDocValues be used from multiple threads? Thanks.

/** Returns a BinaryDocValues for a reader's docvalues (potentially merging on-the-fly).
 *
 * This is a slow way to access binary values. Instead, access them per-segment
 * with {@link AtomicReader#getBinaryDocValues(String)}.
 */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were
indexed for this field. The returned instance should only be used by a single
thread.


was (Author: cnstar9988):
I shared the MultiDocValues.getBinaryValues instance across several threads, and that triggered the problem.
When I changed the code to call MultiDocValues.getBinaryValues separately in each thread, it works OK.
Multithreading may be the cause.

How should BinaryDocValues be used from multiple threads? Thanks.

/** Returns a BinaryDocValues for a reader's docvalues (potentially merging on-the-fly).
 *
 * This is a slow way to access binary values. Instead, access them per-segment
 * with {@link AtomicReader#getBinaryDocValues(String)}.
 */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were
indexed for this field. The returned instance should only be used by a single
thread.

> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at 
> org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)






[jira] [Commented] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-12 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274656#comment-14274656
 ] 

Littlestar commented on LUCENE-6170:


I shared the MultiDocValues.getBinaryValues instance across several threads, and that triggered the problem.
When I changed the code to call MultiDocValues.getBinaryValues separately in each thread, it works OK.
Multithreading may be the cause.

How should BinaryDocValues be used from multiple threads? Thanks.

/** Returns a BinaryDocValues for a reader's docvalues (potentially merging on-the-fly).
 *
 * This is a slow way to access binary values. Instead, access them per-segment
 * with {@link AtomicReader#getBinaryDocValues(String)}.
 */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were
indexed for this field. The returned instance should only be used by a single
thread.

> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at 
> org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)






[jira] [Commented] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271080#comment-14271080
 ] 

Littlestar commented on LUCENE-6170:


Java version "1.7.0_60" with G1 GC.
Both G1 GC and the default CMS collector show the same problem.




> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at 
> org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at 
> org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)






[jira] [Created] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-09 Thread Littlestar (JIRA)
Littlestar created LUCENE-6170:
--

 Summary: MultiDocValues.getSortedValues cause 
IndexOutOfBoundsException
 Key: LUCENE-6170
 URL: https://issues.apache.org/jira/browse/LUCENE-6170
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.10.1
Reporter: Littlestar


Caused by: java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkBounds(Buffer.java:567)
at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
at 
org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
at 
org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
at 
org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
at 
org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
at 
org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
at 
org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
at 
org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)






[jira] [Updated] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5928:
---
Environment: SSD 1.5T, RAM 256G, records 15*1*1  (was: SSD 1.5T, 
RAM 256G 10*1)

> WildcardQuery may has memory leak
> -
>
> Key: LUCENE-5928
> URL: https://issues.apache.org/jira/browse/LUCENE-5928
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.9
> Environment: SSD 1.5T, RAM 256G, records 15*1*1
>Reporter: Littlestar
>Assignee: Uwe Schindler
> Attachments: top_java.jpg
>
>
> data 800G, records 15*1*1.
> one search thread.
> content:???
> content:*
> content:*1
> content:*2
> content:*3
> jvm heap=96G, but the JVM's memory usage exceeds 130G.
> The more wildcard queries we run, the more memory is used.
> Does Lucene search/indexing use a lot of DirectMemory or native memory?
> I set -XX:MaxDirectMemorySize=4g, but it doesn't help.
> Thanks.






[jira] [Comment Edited] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127909#comment-14127909
 ] 

Littlestar edited comment on LUCENE-5928 at 9/10/14 1:45 AM:
-

hi,
when using the default MMapDirectory with jvm heap=96G, the java process RES exceeds 130G. That is RES, not VIRT (VIRT >= 900G).

See the attachment, thanks.
!top_java.jpg!


was (Author: cnstar9988):
hi,
when using the default MMapDirectory with jvm heap=96G, the java process RES exceeds 130G. That is RES, not VIRT (VIRT >= 900G).

See the attachment, thanks.


> WildcardQuery may has memory leak
> -
>
> Key: LUCENE-5928
> URL: https://issues.apache.org/jira/browse/LUCENE-5928
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.9
> Environment: SSD 1.5T, RAM 256G 10*1
>Reporter: Littlestar
>Assignee: Uwe Schindler
> Attachments: top_java.jpg
>
>
> data 800G, records 15*1*1.
> one search thread.
> content:???
> content:*
> content:*1
> content:*2
> content:*3
> jvm heap=96G, but the JVM's memory usage exceeds 130G.
> The more wildcard queries we run, the more memory is used.
> Does Lucene search/indexing use a lot of DirectMemory or native memory?
> I set -XX:MaxDirectMemorySize=4g, but it doesn't help.
> Thanks.






[jira] [Updated] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5928:
---
Attachment: top_java.jpg

hi,
when using the default MMapDirectory with jvm heap=96G, the java process RES exceeds 130G. That is RES, not VIRT (VIRT >= 900G).

See the attachment, thanks.


> WildcardQuery may has memory leak
> -
>
> Key: LUCENE-5928
> URL: https://issues.apache.org/jira/browse/LUCENE-5928
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.9
> Environment: SSD 1.5T, RAM 256G 10*1
>Reporter: Littlestar
>Assignee: Uwe Schindler
> Attachments: top_java.jpg
>
>
> data 800G, records 15*1*1.
> one search thread.
> content:???
> content:*
> content:*1
> content:*2
> content:*3
> jvm heap=96G, but the JVM's memory usage exceeds 130G.
> The more wildcard queries we run, the more memory is used.
> Does Lucene search/indexing use a lot of DirectMemory or native memory?
> I set -XX:MaxDirectMemorySize=4g, but it doesn't help.
> Thanks.






[jira] [Commented] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14126800#comment-14126800
 ] 

Littlestar commented on LUCENE-5928:


When I switched to the NIO directory implementation, it works OK.
Does MMapDirectory use a lot of native memory?
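In a sense, yes: MMapDirectory maps index files into the process address space, and the pages the OS faults in are charged to the process's RES in top even though they live outside the Java heap and are not bounded by -Xmx or -XX:MaxDirectMemorySize. They are reclaimable page cache, so high RES here is usually not a leak. A small pure-Java sketch of the mechanism (the file name and size are arbitrary):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    /** Writes an n-byte temp file, memory-maps it read-only (as MMapDirectory
     *  does with index files), touches a page, and returns the mapped size. */
    static int mapAndMeasure(int n) {
        try {
            Path p = Files.createTempFile("mmap-demo", ".bin");
            Files.write(p, new byte[n]);
            try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                buf.get(0); // faults the page in: shows up in RES, not on the Java heap
                return buf.capacity();
            } finally {
                Files.deleteIfExists(p);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(mapAndMeasure(1 << 20)); // 1048576
    }
}
```

Switching to an NIO-based directory trades the mapped pages for read-time copies through the heap, which matches the smaller-RES behavior observed above.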

> WildcardQuery may has memory leak
> -
>
> Key: LUCENE-5928
> URL: https://issues.apache.org/jira/browse/LUCENE-5928
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.9
> Environment: SSD 1.5T, RAM 256G 10*1
>Reporter: Littlestar
>
> data 800G, records 15*1*1.
> one search thread.
> content:???
> content:*
> content:*1
> content:*2
> content:*3
> jvm heap=96G, but the JVM's memory usage exceeds 130G.
> The more wildcard queries we run, the more memory is used.
> Does Lucene search/indexing use a lot of DirectMemory or native memory?
> I set -XX:MaxDirectMemorySize=4g, but it doesn't help.
> Thanks.






[jira] [Created] (LUCENE-5928) WildcardQuery may has memory leak

2014-09-09 Thread Littlestar (JIRA)
Littlestar created LUCENE-5928:
--

 Summary: WildcardQuery may has memory leak
 Key: LUCENE-5928
 URL: https://issues.apache.org/jira/browse/LUCENE-5928
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: SSD 1.5T, RAM 256G 10*1
Reporter: Littlestar


data 800G, records 15*1*1.
one search thread.
content:???
content:*
content:*1
content:*2
content:*3

jvm heap=96G, but the JVM's memory usage exceeds 130G.
The more wildcard queries we run, the more memory is used.

Does Lucene search/indexing use a lot of DirectMemory or native memory?
I set -XX:MaxDirectMemorySize=4g, but it doesn't help.

Thanks.






[jira] [Updated] (LUCENE-5917) complex Query cause luene outMemory

2014-09-01 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5917:
---
Description: 
RangeQuery, PrefixQuery, and WildcardQuery use a FixedBitSet when TERM_COUNT >= 350 or DOC_COUNT_PERCENT >= 0.1.
This uses a lot of memory when maxDoc is very large.

MultiTermQueryWrapperFilter extends Filter.

A few threads running a query such as "a* OR b* OR c* ... OR z*" will cause Lucene to run out of memory, and there is no way to prevent it.

Some other complex queries use a lot of memory as well.

I think that if queries implemented Accountable (#ramSizeInBytes, #ramBytesUsed), users could fail with an explicit exception, which is better than an OutOfMemoryError.
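The fail-fast idea above could be sketched as a guard that estimates the FixedBitSet cost before allocating. This is a pure-Java illustration; `ramBytesUsedEstimate`, the budget parameter, and the exception choice are assumptions, not Lucene API:

```java
public class QueryRamGuard {
    /** Estimated heap cost of a FixedBitSet covering maxDoc documents:
     *  one bit per doc, stored in 8-byte long words. */
    static long ramBytesUsedEstimate(long maxDoc) {
        long words = (maxDoc + 63) / 64; // round up to whole words
        return words * 8;
    }

    /** Throws before allocation if the estimate exceeds the caller's budget. */
    static void checkBudget(long maxDoc, long budgetBytes) {
        long needed = ramBytesUsedEstimate(maxDoc);
        if (needed > budgetBytes) {
            throw new IllegalStateException(
                "query would need " + needed + " bytes, budget is " + budgetBytes);
        }
    }

    public static void main(String[] args) {
        // 1.5 billion docs: each rewritten clause's bitset costs ~187 MB.
        System.out.println(ramBytesUsedEstimate(1_500_000_000L)); // 187500000
        checkBudget(1_000_000L, 1L << 20); // 125000 bytes fits in a 1 MiB budget
    }
}
```

With per-query estimates like this, an application can reject an expensive multi-term rewrite up front instead of discovering the cost as an OutOfMemoryError mid-search.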

  was:
RangeQuery, PrefixQuery, and WildcardQuery use a FixedBitSet when TERM_COUNT >= 350 or DOC_COUNT_PERCENT >= 0.1.
This uses a lot of memory when maxDoc is very large.

MultiTermQueryWrapperFilter extends Filter.

A few threads running a query such as "a* OR b* OR c* ... OR z*" will cause Lucene to run out of memory, and there is no way to prevent it.

Some other complex queries use a lot of memory as well.

I think that if queries implemented Accountable (#ramSizeInBytes), users could fail with an explicit exception, which is better than an OutOfMemoryError.


> complex Query cause luene outMemory
> ---
>
> Key: LUCENE-5917
> URL: https://issues.apache.org/jira/browse/LUCENE-5917
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.9
> Environment: 256G RAM  + 1.5T SSD + 1.2T DATA(10*1*1 records).
> java heap=128G, G1 GC
>Reporter: Littlestar
>Priority: Minor
>
> RangeQuery, PrefixQuery, and WildcardQuery use a FixedBitSet when TERM_COUNT >= 350 or DOC_COUNT_PERCENT >= 0.1.
> This uses a lot of memory when maxDoc is very large.
> MultiTermQueryWrapperFilter extends Filter.
> A few threads running a query such as "a* OR b* OR c* ... OR z*" will cause Lucene to run out of memory, and there is no way to prevent it.
> Some other complex queries use a lot of memory as well.
> I think that if queries implemented Accountable (#ramSizeInBytes, #ramBytesUsed), users could fail with an explicit exception, which is better than an OutOfMemoryError.






[jira] [Updated] (LUCENE-5917) complex Query cause luene outMemory

2014-09-01 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5917:
---
Environment: 
256G RAM  + 1.5T SSD + 1.2T DATA(10*1*1 records).
java heap=128G, G1 GC


  was:
256G RAM  + 1.5T SSD + 1.2T DATA(10*1*1 records).
java heap=128G, G1



> complex Query cause luene outMemory
> ---
>
> Key: LUCENE-5917
> URL: https://issues.apache.org/jira/browse/LUCENE-5917
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.9
> Environment: 256G RAM  + 1.5T SSD + 1.2T DATA(10*1*1 records).
> java heap=128G, G1 GC
>Reporter: Littlestar
>Priority: Minor
>
> RangeQuery, PrefixQuery, and WildcardQuery use a FixedBitSet when TERM_COUNT >= 350 or DOC_COUNT_PERCENT >= 0.1.
> This uses a lot of memory when maxDoc is very large.
> MultiTermQueryWrapperFilter extends Filter.
> A few threads running a query such as "a* OR b* OR c* ... OR z*" will cause Lucene to run out of memory, and there is no way to prevent it.
> Some other complex queries use a lot of memory as well.
> I think that if queries implemented Accountable (#ramSizeInBytes), users could fail with an explicit exception, which is better than an OutOfMemoryError.






[jira] [Updated] (LUCENE-5917) complex Query cause luene outMemory

2014-09-01 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5917:
---
Environment: 
256G RAM  + 1.5T SSD + 1.2T DATA(10*1*1 records).
java heap=128G, G1


  was:
128G RAM  + 1.5T SSD + 1.2T DATA(10*1*1 records).



> complex Query cause luene outMemory
> ---
>
> Key: LUCENE-5917
> URL: https://issues.apache.org/jira/browse/LUCENE-5917
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.9
> Environment: 256G RAM  + 1.5T SSD + 1.2T DATA(10*1*1 records).
> java heap=128G, G1
>Reporter: Littlestar
>Priority: Minor
>
> RangeQuery, PrefixQuery, and WildcardQuery use a FixedBitSet when TERM_COUNT >= 350 or DOC_COUNT_PERCENT >= 0.1.
> This uses a lot of memory when maxDoc is very large.
> MultiTermQueryWrapperFilter extends Filter.
> A few threads running a query such as "a* OR b* OR c* ... OR z*" will cause Lucene to run out of memory, and there is no way to prevent it.
> Some other complex queries use a lot of memory as well.
> I think that if queries implemented Accountable (#ramSizeInBytes), users could fail with an explicit exception, which is better than an OutOfMemoryError.






[jira] [Commented] (LUCENE-5917) complex Query cause luene outMemory

2014-09-01 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14117852#comment-14117852
 ] 

Littlestar commented on LUCENE-5917:


{code}
MultiTermQueryWrapperFilter.java
...
@Override
  public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) 
throws IOException {
final AtomicReader reader = context.reader();
final Fields fields = reader.fields();
if (fields == null) {
  // reader has no fields
  return null;
}

final Terms terms = fields.terms(query.field);
if (terms == null) {
  // field does not exist
  return null;
}

final TermsEnum termsEnum = query.getTermsEnum(terms);
assert termsEnum != null;
if (termsEnum.next() != null) {
  // fill into a FixedBitSet
  final FixedBitSet bitSet = new FixedBitSet(context.reader().maxDoc()); // <== allocated here
  DocsEnum docsEnum = null;
  do {
// System.out.println("  iter termCount=" + termCount + " term=" +
// enumerator.term().toBytesString());
docsEnum = termsEnum.docs(acceptDocs, docsEnum, DocsEnum.FLAG_NONE);
int docid;
while ((docid = docsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
  bitSet.set(docid);
}
  } while (termsEnum.next() != null);
  // System.out.println("  done termCount=" + termCount);

  return bitSet;
} else {
  return null;
}
  }
{code}
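One way to realize the Accountable idea suggested in this issue is to estimate the bitset's size before allocating it, and fail fast when it would exceed a budget. This is a hypothetical guard written for illustration, not an actual Lucene API:

```java
// Hypothetical guard: refuse to build a bitset whose estimated size exceeds
// a caller-supplied byte budget, instead of allocating and risking OOM.
public final class BitSetBudget {
    private BitSetBudget() {}

    /** Throws if a FixedBitSet over maxDoc docs would exceed budgetBytes. */
    public static void checkBudget(int maxDoc, long budgetBytes) {
        long estimatedBytes = ((long) maxDoc + 7) / 8; // one bit per doc
        if (estimatedBytes > budgetBytes) {
            throw new IllegalStateException(
                "bitset would need " + estimatedBytes
                    + " bytes, budget is " + budgetBytes);
        }
    }

    public static void main(String[] args) {
        checkBudget(1_000_000, 10L * 1024 * 1024);    // fine: ~125 KB
        try {
            checkBudget(Integer.MAX_VALUE, 1024);     // far over a 1 KB budget
        } catch (IllegalStateException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

The caller would invoke checkBudget right before the FixedBitSet allocation marked in the snippet above.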


> complex Query cause luene outMemory
> ---
>
> Key: LUCENE-5917
> URL: https://issues.apache.org/jira/browse/LUCENE-5917
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.9
> Environment: 128G RAM  + 1.5T SSD + 1.2T DATA(10*1*1 records).
>Reporter: Littlestar
>Priority: Minor
>
> RangeQuery, PrefixQuery, and WildcardQuery fall back to a FixedBitSet when 
> TERM_COUNT >= 350 or DOC_COUNT_PERCENT >= 0.1.
> This uses a lot of memory when maxDoc is very large.
> (MultiTermQueryWrapperFilter extends Filter.)
> A few threads running a query like "a* OR b* OR c* ... OR z*" can cause 
> Lucene to run out of memory, and there is no way to prevent it.
> Some other complex queries also use a lot of memory.
> I think Query should implement Accountable (#ramSizeInBytes), so users can 
> throw an exception rather than hit OutOfMemoryError.






[jira] [Created] (LUCENE-5917) complex Query cause luene outMemory

2014-09-01 Thread Littlestar (JIRA)
Littlestar created LUCENE-5917:
--

 Summary: complex Query cause luene outMemory
 Key: LUCENE-5917
 URL: https://issues.apache.org/jira/browse/LUCENE-5917
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.9
 Environment: 128G RAM  + 1.5T SSD + 1.2T DATA(10*1*1 records).

Reporter: Littlestar
Priority: Minor


RangeQuery, PrefixQuery, and WildcardQuery fall back to a FixedBitSet when 
TERM_COUNT >= 350 or DOC_COUNT_PERCENT >= 0.1.
This uses a lot of memory when maxDoc is very large.

MultiTermQueryWrapperFilter extends Filter.

A few threads running a query like "a* OR b* OR c* ... OR z*" can cause 
Lucene to run out of memory, and there is no way to prevent it.

Some other complex queries also use a lot of memory.

I think Query should implement Accountable (#ramSizeInBytes), so users can 
throw an exception rather than hit OutOfMemoryError.






[jira] [Commented] (LUCENE-5899) shenandoah GC can cause "ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum"

2014-08-28 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14113502#comment-14113502
 ] 

Littlestar commented on LUCENE-5899:


I tested again and saw some strange exceptions:

org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.NullPointerException
org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: 
Invalid vLong detected (negative values disallowed)

I posted them at the top.

> shenandoah GC can cause "ClassCastException: 
> org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
> org.apache.lucene.index.DocsAndPositionsEnum"
> --
>
> Key: LUCENE-5899
> URL: https://issues.apache.org/jira/browse/LUCENE-5899
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 4.9
> Environment: I test lucene on shenandoah with big memory + 
> bigdata(32g, 128g).   
> http://openjdk.java.net/jeps/189
> http://icedtea.classpath.org/hg/shenandoah/
>Reporter: Littlestar
>Priority: Minor
>
> User report of a bizarre ClassCastException when running some Lucene code 
> with the (experimental) Shenandoah GC.
> {code}
> Exception in thread "Lucene Merge Thread #0" 
> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
> cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
> Caused by: java.lang.ClassCastException: 
> org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
> org.apache.lucene.index.DocsAndPositionsEnum
>   at 
> org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
>   at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
>   at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> {code}
> {code}
> Exception in thread "Lucene Merge Thread #0" 
> org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: 
> Invalid vLong detected (negative values disallowed)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
> Caused by: java.io.IOException: Invalid vLong detected (negative values 
> disallowed)
>   at org.apache.lucene.store.DataInput.readVLong(DataInput.java:193)
>   at 
> org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:169)
>   at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:441)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:197)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:254)
>   at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
>   at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:107)
>   at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143)
>   at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:668)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4099)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> {code}
> {code}
> Exception in thread "Lucene Merge Thread #0" 
> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.NullPointerException
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
>   at 
> org.apache.lucene.index.ConcurrentMergeSch

[jira] [Updated] (LUCENE-5899) shenandoah GC can cause "ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum"

2014-08-28 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5899:
---

Description: 
User report of a bizarre ClassCastException when running some Lucene code with 
the (experimental) Shenandoah GC.

{code}
Exception in thread "Lucene Merge Thread #0" 
org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.ClassCastException: 
org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
org.apache.lucene.index.DocsAndPositionsEnum
at 
org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
{code}


{code}
Exception in thread "Lucene Merge Thread #0" 
org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: 
Invalid vLong detected (negative values disallowed)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.io.IOException: Invalid vLong detected (negative values 
disallowed)
at org.apache.lucene.store.DataInput.readVLong(DataInput.java:193)
at 
org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:169)
at 
org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:441)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:197)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:254)
at 
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:107)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:668)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4099)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
{code}

{code}
Exception in thread "Lucene Merge Thread #0" 
org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.NullPointerException
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.NullPointerException
at 
org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:169)
at 
org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:441)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:197)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:254)
at 
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:107)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:668)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4099)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene

[jira] [Updated] (LUCENE-5899) shenandoah GC can cause "ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum"

2014-08-28 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5899:
---

Description: 
User report of a bizarre ClassCastException when running some Lucene code with 
the (experimental) Shenandoah GC.

{code}
Exception in thread "Lucene Merge Thread #0" 
org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.ClassCastException: 
org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
org.apache.lucene.index.DocsAndPositionsEnum
at 
org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
{code}


{code}
Exception in thread "Lucene Merge Thread #0" 
org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: 
Invalid vLong detected (negative values disallowed)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.io.IOException: Invalid vLong detected (negative values 
disallowed)
at org.apache.lucene.store.DataInput.readVLong(DataInput.java:193)
at 
org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:169)
at 
org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:441)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:197)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:254)
at 
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:107)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:668)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4099)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
{code}

  was:
User report of a bizarre ClassCastException when running some Lucene code with 
the (experimental) Shenandoah GC.

{code}
Exception in thread "Lucene Merge Thread #0" 
org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.ClassCastException: 
org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
org.apache.lucene.index.DocsAndPositionsEnum
at 
org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(Conc

[jira] [Commented] (LUCENE-5899) shenandoah GC can cause "ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum"

2014-08-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14110144#comment-14110144
 ] 

Littlestar commented on LUCENE-5899:


I posted a mail to the Shenandoah development mailing list; marking it here:
http://icedtea.classpath.org/pipermail/shenandoah/2014-August/000163.html

Thanks.

> shenandoah GC can cause "ClassCastException: 
> org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
> org.apache.lucene.index.DocsAndPositionsEnum"
> --
>
> Key: LUCENE-5899
> URL: https://issues.apache.org/jira/browse/LUCENE-5899
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 4.9
> Environment: I test lucene on shenandoah with big memory + 
> bigdata(32g, 128g).   
> http://openjdk.java.net/jeps/189
> http://icedtea.classpath.org/hg/shenandoah/
>Reporter: Littlestar
>Priority: Minor
>
> User report of a bizarre ClassCastException when running some Lucene code 
> with the (experimental) Shenandoah GC.
> {code}
> Exception in thread "Lucene Merge Thread #0" 
> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
> cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
> Caused by: java.lang.ClassCastException: 
> org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
> org.apache.lucene.index.DocsAndPositionsEnum
>   at 
> org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
>   at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
>   at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> {code}






[jira] [Comment Edited] (LUCENE-5899) Caused by: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum

2014-08-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108841#comment-14108841
 ] 

Littlestar edited comment on LUCENE-5899 at 8/25/14 7:22 AM:
-

I tested Lucene on Shenandoah with big memory + big data (128G, 1.2T).
http://openjdk.java.net/jeps/189
http://icedtea.classpath.org/hg/shenandoah/

It throws the exception above.
Looking at the Lucene code, I think throwing the exception is correct, 
because org.apache.lucene.codecs.MappingMultiDocsEnum is not an instance of 
org.apache.lucene.index.DocsAndPositionsEnum.



was (Author: cnstar9988):
I tested Lucene on Shenandoah with big memory + big data (32g, 128g).
http://openjdk.java.net/jeps/189
http://icedtea.classpath.org/hg/shenandoah/

It throws the exception above.
Looking at the Lucene code, I think throwing the exception is correct, 
because org.apache.lucene.codecs.MappingMultiDocsEnum is not an instance of 
org.apache.lucene.index.DocsAndPositionsEnum.


> Caused by: java.lang.ClassCastException: 
> org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
> org.apache.lucene.index.DocsAndPositionsEnum
> -
>
> Key: LUCENE-5899
> URL: https://issues.apache.org/jira/browse/LUCENE-5899
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 4.9
>Reporter: Littlestar
> Fix For: 4.10
>
>
> Exception in thread "Lucene Merge Thread #0" 
> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
> cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
> Caused by: java.lang.ClassCastException: 
> org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
> org.apache.lucene.index.DocsAndPositionsEnum
>   at 
> org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
>   at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
>   at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)






[jira] [Commented] (LUCENE-5899) Caused by: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum

2014-08-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108841#comment-14108841
 ] 

Littlestar commented on LUCENE-5899:


I tested Lucene on Shenandoah with big memory + big data (32g, 128g).
http://openjdk.java.net/jeps/189
http://icedtea.classpath.org/hg/shenandoah/

It throws the exception above.
Looking at the Lucene code, I think throwing the exception is correct, 
because org.apache.lucene.codecs.MappingMultiDocsEnum is not an instance of 
org.apache.lucene.index.DocsAndPositionsEnum.
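The reasoning above — the cast must fail because the object's runtime class does not extend the target type — can be illustrated with stand-in types. These are simplified placeholders, not the real Lucene classes:

```java
// Minimal illustration of the failure mode: casting an object to a subtype
// its class does not extend always throws ClassCastException at runtime.
class DocsEnumLike {}                              // stands in for MappingMultiDocsEnum
class DocsAndPositionsEnumLike extends DocsEnumLike {}

public class CastDemo {
    public static void main(String[] args) {
        DocsEnumLike e = new DocsEnumLike();
        try {
            DocsAndPositionsEnumLike p = (DocsAndPositionsEnumLike) e; // invalid cast
        } catch (ClassCastException expected) {
            System.out.println("caught: " + expected.getClass().getSimpleName());
        }
    }
}
```

So if the JVM (here, the experimental Shenandoah GC) delivers an object of the wrong runtime class, the ClassCastException itself is the correct response.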


> Caused by: java.lang.ClassCastException: 
> org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
> org.apache.lucene.index.DocsAndPositionsEnum
> -
>
> Key: LUCENE-5899
> URL: https://issues.apache.org/jira/browse/LUCENE-5899
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 4.9
>Reporter: Littlestar
> Fix For: 4.10
>
>
> Exception in thread "Lucene Merge Thread #0" 
> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
> cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
> Caused by: java.lang.ClassCastException: 
> org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
> org.apache.lucene.index.DocsAndPositionsEnum
>   at 
> org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
>   at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
>   at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)






[jira] [Created] (LUCENE-5899) Caused by: java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum

2014-08-22 Thread Littlestar (JIRA)
Littlestar created LUCENE-5899:
--

 Summary: Caused by: java.lang.ClassCastException: 
org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
org.apache.lucene.index.DocsAndPositionsEnum
 Key: LUCENE-5899
 URL: https://issues.apache.org/jira/browse/LUCENE-5899
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/codecs
Affects Versions: 4.9
Reporter: Littlestar
 Fix For: 4.10


Exception in thread "Lucene Merge Thread #0" 
org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.ClassCastException: org.apache.lucene.codecs.MappingMultiDocsEnum 
cannot be cast to org.apache.lucene.index.DocsAndPositionsEnum
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.ClassCastException: 
org.apache.lucene.codecs.MappingMultiDocsEnum cannot be cast to 
org.apache.lucene.index.DocsAndPositionsEnum
at 
org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:127)
at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:110)
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4163)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3759)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)







[jira] [Updated] (LUCENE-5849) Scary "read past EOF" in RAMDir

2014-07-28 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5849:
---

Attachment: TestBinaryDocIndex.java

I reproduced it.

When doc > maxDoc, it may throw this exception.
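A minimal sketch of the kind of bounds check involved (a hypothetical helper for illustration, not the attached test): validate the doc id against maxDoc before dereferencing values for it, so an out-of-range id fails loudly instead of reading past the end of the underlying stream.

```java
// Hypothetical guard: reject out-of-range doc ids up front, rather than
// letting a doc-values lookup read past EOF in the backing input.
public final class DocIdGuard {
    private DocIdGuard() {}

    public static int checkDocId(int docId, int maxDoc) {
        if (docId < 0 || docId >= maxDoc) {
            throw new IndexOutOfBoundsException(
                "docId " + docId + " out of range [0, " + maxDoc + ")");
        }
        return docId;
    }

    public static void main(String[] args) {
        System.out.println(checkDocId(5, 10));  // valid
        try {
            checkDocId(10, 10);                 // doc == maxDoc is invalid
        } catch (IndexOutOfBoundsException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```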

> Scary "read past EOF" in RAMDir
> ---
>
> Key: LUCENE-5849
> URL: https://issues.apache.org/jira/browse/LUCENE-5849
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Attachments: TestBinaryDocIndex.java
>
>
> Nightly build hit this: 
> http://builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/91095
> And I'm able to repro at least once after beasting w/ the right JVM 
> (1.7.0_55) and G1GC.






[jira] [Commented] (LUCENE-5849) Scary "read past EOF" in RAMDir

2014-07-28 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076288#comment-14076288
 ] 

Littlestar commented on LUCENE-5849:


I hit this bug too when reading a binary doc value.
I applied the patch from LUCENE-5681, but it is still wrong...

java.lang.RuntimeException: java.io.EOFException: read past EOF: 
RAMInputStream(name=RAMInputStream(name=_0_Lucene49_0.dvd) [slice=randomaccess])
at 
org.apache.lucene.util.packed.DirectReader$DirectPackedReader1.get(DirectReader.java:78)
at 
org.apache.lucene.codecs.lucene49.Lucene49DocValuesProducer$1.get(Lucene49DocValuesProducer.java:345)
at org.apache.lucene.util.LongValues.get(LongValues.java:45)

> Scary "read past EOF" in RAMDir
> ---
>
> Key: LUCENE-5849
> URL: https://issues.apache.org/jira/browse/LUCENE-5849
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>
> Nightly build hit this: 
> http://builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/91095
> And I'm able to repro at least once after beasting w/ the right JVM 
> (1.7.0_55) and G1GC.






[jira] [Comment Edited] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-20 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14067934#comment-14067934
 ] 

Littlestar edited comment on LUCENE-5801 at 7/20/14 2:30 PM:
-

The fix still fails (no exception is thrown, but it produces wrong facet data).

By default, "$facets" is not in facetFields, so the merge produces a wrong result.

The following works OK:
{code}
public OrdinalMappingAtomicReader(AtomicReader in, int[] ordinalMap, FacetsConfig srcConfig) {
  super(in);
  this.ordinalMap = ordinalMap;
  facetsConfig = new InnerFacetsConfig();
  facetFields = new HashSet<>();
+ if (srcConfig.getDimConfigs().size() == 0) {
+   facetFields.add(FacetsConfig.DEFAULT_DIM_CONFIG.indexFieldName);
+ }
  for (DimConfig dc : srcConfig.getDimConfigs().values()) {
    facetFields.add(dc.indexFieldName);
  }
}
{code}
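The idea behind the added lines can be sketched without any Lucene dependency: when no dimension configs were registered, all ordinals went into the default facet index field (named "$facets" in Lucene), so that field must still be added to the set or the merge will skip it. In this sketch, DimConfig is a minimal hypothetical stand-in for FacetsConfig.DimConfig:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Minimal stand-in for FacetsConfig.DimConfig: only the index field name matters here.
class DimConfig {
    final String indexFieldName;
    DimConfig(String indexFieldName) { this.indexFieldName = indexFieldName; }
}

public class FacetFieldSet {
    static final String DEFAULT_INDEX_FIELD = "$facets"; // Lucene's default facet index field name

    // Collect the index fields that hold facet ordinals; when no dims were
    // configured, everything went into the default field, so include it explicitly.
    static Set<String> facetFields(Collection<DimConfig> dimConfigs) {
        Set<String> fields = new HashSet<>();
        if (dimConfigs.isEmpty()) {
            fields.add(DEFAULT_INDEX_FIELD);
        }
        for (DimConfig dc : dimConfigs) {
            fields.add(dc.indexFieldName);
        }
        return fields;
    }

    public static void main(String[] args) {
        // No explicit dims: the default field must be present.
        System.out.println(facetFields(Collections.<DimConfig>emptyList())); // [$facets]
        // Explicit dims: only their index fields are collected.
        System.out.println(facetFields(Arrays.asList(new DimConfig("$categories")))); // [$categories]
    }
}
```

Without the `isEmpty()` branch, an index built with the default config would yield an empty set, which matches the wrong-merge symptom described above.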



> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>Assignee: Shai Erera
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5801.patch, LUCENE-5801.patch, LUCENE-5801.patch, 
> LUCENE-5801_1.patch, LUCENE-5801_2.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-20 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14067934#comment-14067934
 ] 

Littlestar commented on LUCENE-5801:


The fix still fails, because by default "$facets" is not in facetFields, so the merge produces a wrong result.

The following works OK:
{code}
public OrdinalMappingAtomicReader(AtomicReader in, int[] ordinalMap, FacetsConfig srcConfig) {
  super(in);
  this.ordinalMap = ordinalMap;
  facetsConfig = new InnerFacetsConfig();
  facetFields = new HashSet<>();
+ if (srcConfig.getDimConfigs().size() == 0) {
+   facetFields.add(FacetsConfig.DEFAULT_DIM_CONFIG.indexFieldName);
+ }
  for (DimConfig dc : srcConfig.getDimConfigs().values()) {
    facetFields.add(dc.indexFieldName);
  }
}
{code}

> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>Assignee: Shai Erera
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5801.patch, LUCENE-5801.patch, LUCENE-5801.patch, 
> LUCENE-5801_1.patch, LUCENE-5801_2.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Comment Edited] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14067583#comment-14067583
 ] 

Littlestar edited comment on LUCENE-5801 at 7/19/14 5:05 PM:
-

I tested again.

This bug only occurs when there is another BinaryDocValues field,
i.e. two fields: a FacetField plus a BinaryDocValuesField.

When I remove the BinaryDocValuesField, the test passes.

I think OrdinalMappingBinaryDocValues#getBinaryDocValues is wrong in 4.10.0:
it does not check whether the BinaryDocValuesField is a facet field; it just
wraps it in an OrdinalMappingBinaryDocValues.

In 4.6.1, it checked whether the field exists in dvFieldMap.

In 4.6.1:
{code}
@Override
public BinaryDocValues getBinaryDocValues(String field) throws IOException {
  BinaryDocValues inner = super.getBinaryDocValues(field);
  if (inner == null) {
    return inner;
  }

  CategoryListParams clp = dvFieldMap.get(field);
  if (clp == null) {
    return inner;
  } else {
    return new OrdinalMappingBinaryDocValues(clp, inner);
  }
}
{code}





> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>Assignee: Shai Erera
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
> LUCENE-5801_1.patch, LUCENE-5801_2.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Comment Edited] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14067583#comment-14067583
 ] 

Littlestar edited comment on LUCENE-5801 at 7/19/14 4:55 PM:
-

I tested again.

This bug only occurs when there is another BinaryDocValues field,
i.e. two fields: a FacetField plus a BinaryDocValuesField.

When I remove the BinaryDocValuesField, the test passes.

I think getBinaryDocValues is wrong in 4.10.0:
it does not check whether the BinaryDocValuesField is a facet field; it just
wraps it in an OrdinalMappingBinaryDocValues.

In 4.6.1, it checks whether the field is in dvFieldMap.

In 4.6.1:
{code}
@Override
public BinaryDocValues getBinaryDocValues(String field) throws IOException {
  BinaryDocValues inner = super.getBinaryDocValues(field);
  if (inner == null) {
    return inner;
  }

  CategoryListParams clp = dvFieldMap.get(field);
  if (clp == null) {
    return inner;
  } else {
    return new OrdinalMappingBinaryDocValues(clp, inner);
  }
}
{code}





> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>Assignee: Shai Erera
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
> LUCENE-5801_1.patch, LUCENE-5801_2.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14067586#comment-14067586
 ] 

Littlestar commented on LUCENE-5801:


The following patch works well for me.

OrdinalMappingAtomicReader (4.10.0):
{code}
@Override
public BinaryDocValues getBinaryDocValues(String field) throws IOException {
  if (!field.equals(facetsConfig.getDimConfig(field).indexFieldName)) {
    return super.getBinaryDocValues(field);
  }

  final OrdinalsReader ordsReader = getOrdinalsReader(field);
  return new OrdinalMappingBinaryDocValues(ordsReader.getReader(in.getContext()));
}
{code}
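Stripped of the Lucene types, the guard in the patch reduces to a name check: remap only fields registered as facet index fields, and pass every other binary field through untouched. A minimal Lucene-free sketch (the field names "$facets" and "thumbnail" are hypothetical examples):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FacetFieldGuard {
    // Only fields registered as facet index fields carry encoded ordinals;
    // any other binary doc-values field must be returned unwrapped.
    static String resolve(String field, Set<String> facetIndexFields) {
        return facetIndexFields.contains(field)
                ? "remapped(" + field + ")"
                : "passthrough(" + field + ")";
    }

    public static void main(String[] args) {
        Set<String> facetFields = new HashSet<>(Arrays.asList("$facets"));
        System.out.println(resolve("$facets", facetFields));   // remapped($facets)
        System.out.println(resolve("thumbnail", facetFields)); // passthrough(thumbnail)
    }
}
```

Skipping this check is exactly what made the 4.10 snapshot decode a plain BinaryDocValuesField as facet ordinals.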

> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>Assignee: Shai Erera
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
> LUCENE-5801_1.patch, LUCENE-5801_2.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Comment Edited] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14067583#comment-14067583
 ] 

Littlestar edited comment on LUCENE-5801 at 7/19/14 4:45 PM:
-

I tested again.

This bug only occurs when there is another BinaryDocValues field,
i.e. two fields: a FacetField plus a BinaryDocValuesField.

When I remove the BinaryDocValuesField, the test passes.

I checked OrdinalMappingAtomicReader; is there no check for the facet field?

I think getBinaryDocValues is wrong in 4.10.0.
In 4.6.1, it checks whether the field is in dvFieldMap.

In 4.6.1:
{code}
@Override
public BinaryDocValues getBinaryDocValues(String field) throws IOException {
  BinaryDocValues inner = super.getBinaryDocValues(field);
  if (inner == null) {
    return inner;
  }

  CategoryListParams clp = dvFieldMap.get(field);
  if (clp == null) {
    return inner;
  } else {
    return new OrdinalMappingBinaryDocValues(clp, inner);
  }
}
{code}




> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>Assignee: Shai Erera
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
> LUCENE-5801_1.patch, LUCENE-5801_2.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14067583#comment-14067583
 ] 

Littlestar commented on LUCENE-5801:


I tested again.

This bug only occurs when there is another BinaryDocValues field,
i.e. two fields: a FacetField plus a BinaryDocValuesField.

When I remove the BinaryDocValuesField, the test passes.

I checked OrdinalMappingAtomicReader; it makes no distinction between a normal
BinaryDocValuesField and a FacetField?

> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>Assignee: Shai Erera
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
> LUCENE-5801_1.patch, LUCENE-5801_2.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14067556#comment-14067556
 ] 

Littlestar commented on LUCENE-5801:


4.6.1 works well, but 4.9.0 fails.

In 4.6.1 I use CategoryPath; in 4.9.0 I use FacetField.

4.9.0 is missing OrdinalMappingAtomicReader, so I took it from the 4.10 trunk.
I use it for merging indexes together with their taxonomies.

> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>Assignee: Shai Erera
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
> LUCENE-5801_1.patch, LUCENE-5801_2.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-18 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066434#comment-14066434
 ] 

Littlestar commented on LUCENE-5801:


There may be a java.lang.ArrayIndexOutOfBoundsException in this code.
4.6.1 works well, but 4.9.0 fails.
Why is there only an encode here, and no decode?

{code}
java.lang.ArrayIndexOutOfBoundsException: 118
	at com.test.OrdinalMappingAtomicReader$OrdinalMappingBinaryDocValues.get(OrdinalMappingAtomicReader.java:100)
	at org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:263)
	at org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:224)
	at org.apache.lucene.codecs.lucene49.Lucene49DocValuesConsumer.addBinaryField(Lucene49DocValuesConsumer.java:256)
	at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:112)
	at org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:207)
	at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:193)
	at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:122)
	at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2695)
{code}


{code}
@SuppressWarnings("synthetic-access")
@Override
public BytesRef get(int docID) {
  try {
    // NOTE: this isn't quite koscher, because in general
    // multiple threads can call BinaryDV.get which would
    // then conflict on the single ordinals instance, but
    // because this impl is only used for merging, we know
    // only 1 thread calls us:
    ordsReader.get(docID, ordinals);

    // map the ordinals
    for (int i = 0; i < ordinals.length; i++) {
      ordinals.ints[i] = ordinalMap[ordinals.ints[i]]; // here is an ArrayIndexOutOfBoundsException
    }

    return encode(ordinals);
  } catch (IOException e) {
    throw new RuntimeException("error reading category ordinals for doc " + docID, e);
  } catch (Throwable e) {
    throw new RuntimeException("error reading category ordinals for doc " + docID, e); // this line is only for a breakpoint
  }
}
{code}




The ordinals read back for this document (debugger view):
{code}
ordinals
  ints (id=257)
  length 135
  offset 0
  [76 db 14d 18a 1bb 1de 241 2a7 315 352 387 3bf 3ee 424 458 48b 4be 4ed 523 558
   5b9 5ea 61f 650 681 6e7 714 74b 780 7e1 842 86f 8a3 906 96b 99d 9ca a03 a39 a71
   ad5 b02 b63 bc7 c29 c5d c8d cbe d1f d57 dba df2 e29 e61 e84 eed f53 fc1 ffe
   1060 1095 10fa 115b 118f 11c7 11fd 122e 125b 1292 12f8 135b 138d 13ba 13ee 1451
   1484 14b4 14e1 1543 1579 15db 163f 166c 16a5 170b 1744 1779 17de 1812 1847 187e
   18b1 18e8 191d 1950 1973 19e2 1a48 1aae 1aeb 1b1b 1b3e 1baa 1c0f 1c7d 1cba 1ceb
   1d1e 1d52 1d86 1db8 1ddb 1e3e 1eb0 1f13 1f50 1f81 1fb1 1fe9 2020 2059 208e 20c4
   20f7 2128 215f 2182 21f5 225b 22d3 2310 233e 23a8 2418 247f]
{code}
but ordinalMap.length is only 10.
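This failure mode is easy to reproduce without Lucene: if bytes of a non-facet binary field are decoded as if they were taxonomy ordinals, the resulting values (here 118, matching the exception above) index past the end of the much shorter ordinal map. A sketch under that assumption:

```java
import java.util.Arrays;

public class OrdinalMapOverflow {
    // Remap each ordinal through the taxonomy ordinal map, as the merge does.
    static int[] remap(int[] ordinals, int[] ordinalMap) {
        int[] out = new int[ordinals.length];
        for (int i = 0; i < ordinals.length; i++) {
            out[i] = ordinalMap[ordinals[i]]; // throws if ordinals[i] >= ordinalMap.length
        }
        return out;
    }

    public static void main(String[] args) {
        int[] ordinalMap = new int[10]; // the merged taxonomy has only 10 ordinals
        int[] bogusOrdinals = {118};    // bytes of a non-facet field, misread as an ordinal
        try {
            remap(bogusOrdinals, ordinalMap);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("overflow at ordinal " + e.getMessage());
        }
        // A genuine facet ordinal within range maps cleanly:
        System.out.println(Arrays.toString(remap(new int[]{3}, new int[]{0, 1, 2, 7, 4, 5, 6, 7, 8, 9})));
    }
}
```

This is why guarding getBinaryDocValues by field name (as in the patch above) makes the exception disappear: the bogus "ordinals" never reach the map.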

> Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> -
>
> Key: LUCENE-5801
> URL: https://issues.apache.org/jira/browse/LUCENE-5801
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.7
>Reporter: Nicola Buso
>Assignee: Shai Erera
> Fix For: 5.0, 4.10
>
> Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
> LUCENE-5801_1.patch, LUCENE-5801_2.patch
>
>
> from lucene > 4.6.1 the class:
> org.apache.lucene.facet.util.OrdinalMappingAtomicReader
> was removed; resurrect it because used merging indexes related to merged 
> taxonomies.






[jira] [Commented] (LUCENE-5228) IndexWriter.addIndexes copies raw files but acquires no locks

2014-01-26 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13882270#comment-13882270
 ] 

Littlestar commented on LUCENE-5228:


>>>IndexWriter.addIndexes(Directory[]) now acquires the IW write lock 
I think the write lock is too expensive for my app; a SnapshotDeletionPolicy is
OK in some situations.
Is there an equivalent, such as IndexWriter.addIndexes(IndexCommit)?
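The discipline the javadoc asks callers to enforce can be sketched with plain java.nio file locking: hold an exclusive lock on a marker file in the source directory while copying raw files, so a cooperating writer cannot mutate them mid-copy. The lock-file protocol here is a hypothetical illustration, not Lucene's LockFactory:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class LockedCopy {
    // Copy all regular files from srcDir to dstDir while holding an exclusive
    // lock on srcDir/lockName; a concurrent writer using the same protocol
    // would block instead of changing files mid-copy.
    static void copyUnderLock(Path srcDir, Path dstDir, String lockName) throws IOException {
        Path lockPath = srcDir.resolve(lockName);
        try (FileChannel ch = FileChannel.open(lockPath, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock lock = ch.lock()) {
            Files.createDirectories(dstDir);
            try (DirectoryStream<Path> files = Files.newDirectoryStream(srcDir)) {
                for (Path f : files) {
                    if (!f.equals(lockPath) && Files.isRegularFile(f)) {
                        Files.copy(f, dstDir.resolve(f.getFileName()), StandardCopyOption.REPLACE_EXISTING);
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("src");
        Path dst = Files.createTempDirectory("dst");
        Files.write(src.resolve("_0.si"), new byte[]{1, 2, 3});
        copyUnderLock(src, dst, "copy.lock");
        System.out.println(Files.exists(dst.resolve("_0.si")));
    }
}
```

The point of the issue is that addIndexes(Directory...) copies raw files exactly like this loop but without taking any such lock, leaving consistency entirely to the caller.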


> IndexWriter.addIndexes copies raw files but acquires no locks
> -
>
> Key: LUCENE-5228
> URL: https://issues.apache.org/jira/browse/LUCENE-5228
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Michael McCandless
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5228.patch
>
>
> I see stuff like: "merge problem with lucene 3 and 4 indices" (from solr 
> users list), and cannot even think how to respond to these users because so 
> many things can go wrong with IndexWriter.addIndexes(Directory)
> it currently has in its javadocs:
> NOTE: the index in each Directory must not be changed (opened by a writer) 
> while this method is running. This method does not acquire a write lock in 
> each input Directory, so it is up to the caller to enforce this. 
> This method should be acquiring locks: its copying *RAW FILES*. Otherwise we 
> should remove it. If someone doesnt like that, or is mad because its 10ns 
> slower, they can use NoLockFactory. 






[jira] [Comment Edited] (LUCENE-5228) IndexWriter.addIndexes copies raw files but acquires no locks

2014-01-26 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13882266#comment-13882266
 ] 

Littlestar edited comment on LUCENE-5228 at 1/26/14 10:54 AM:
--

Not fixed in 4.6.1? Thanks.

>>>I see stuff like: "merge problem with lucene 3 and 4 indices" (from solr 
>>>users list), and cannot even think how to respond to these users because so 
>>>many things can go wrong with IndexWriter.addIndexes(Directory)
This may be related to LUCENE-5377.



> IndexWriter.addIndexes copies raw files but acquires no locks
> -
>
> Key: LUCENE-5228
> URL: https://issues.apache.org/jira/browse/LUCENE-5228
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Michael McCandless
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5228.patch
>
>
> I see stuff like: "merge problem with lucene 3 and 4 indices" (from solr 
> users list), and cannot even think how to respond to these users because so 
> many things can go wrong with IndexWriter.addIndexes(Directory)
> it currently has in its javadocs:
> NOTE: the index in each Directory must not be changed (opened by a writer) 
> while this method is running. This method does not acquire a write lock in 
> each input Directory, so it is up to the caller to enforce this. 
> This method should be acquiring locks: its copying *RAW FILES*. Otherwise we 
> should remove it. If someone doesnt like that, or is mad because its 10ns 
> slower, they can use NoLockFactory. 






[jira] [Commented] (LUCENE-5228) IndexWriter.addIndexes copies raw files but acquires no locks

2014-01-26 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13882266#comment-13882266
 ] 

Littlestar commented on LUCENE-5228:


Not fixed in 4.6.1? Thanks.

>>>I see stuff like: "merge problem with lucene 3 and 4 indices" (from solr 
>>>users list), and cannot even think how to respond to these users because so 
>>>many things can go wrong with IndexWriter.addIndexes(Directory)
This may be related to https://issues.apache.org/jira/browse/LUCENE-5377.

> IndexWriter.addIndexes copies raw files but acquires no locks
> -
>
> Key: LUCENE-5228
> URL: https://issues.apache.org/jira/browse/LUCENE-5228
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Michael McCandless
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5228.patch
>
>
> I see stuff like: "merge problem with lucene 3 and 4 indices" (from solr 
> users list), and cannot even think how to respond to these users because so 
> many things can go wrong with IndexWriter.addIndexes(Directory)
> it currently has in its javadocs:
> NOTE: the index in each Directory must not be changed (opened by a writer) 
> while this method is running. This method does not acquire a write lock in 
> each input Directory, so it is up to the caller to enforce this. 
> This method should be acquiring locks: its copying *RAW FILES*. Otherwise we 
> should remove it. If someone doesnt like that, or is mad because its 10ns 
> slower, they can use NoLockFactory. 






[jira] [Commented] (LUCENE-5377) IW.addIndexes(Dir[]) causes silent index corruption

2014-01-22 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13879383#comment-13879383
 ] 

Littlestar commented on LUCENE-5377:


The patch works well for me. Thanks!

> IW.addIndexes(Dir[]) causes silent index corruption
> ---
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
>Assignee: Robert Muir
> Fix For: 5.0, 4.7, 4.6.1
>
> Attachments: LUCENE-5377.patch, LUCENE-5377.patch, TestCore_45.java, 
> TestCore_46.java
>
>
> my old facet index create by Lucene version=4.2
> use indexChecker ok.
> now I upgrade to Lucene 4.6 and put some new records to index.
> then reopen index, some files in indexdir missing
> no .si files.
> I debug into it,  new version format of segments.gen(segments_N) record bad 
> segments info.






[jira] [Commented] (LUCENE-5377) Lucene 4.6 cause mixed version segments info file(.si) wrong

2014-01-21 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878294#comment-13878294
 ] 

Littlestar commented on LUCENE-5377:


I debugged into it: Lucene 4.6's new segments.gen (segments_N) format records bad segment info.


> Lucene 4.6 cause mixed version segments info file(.si) wrong
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, modules/facet
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
> Attachments: TestCore_45.java, TestCore_46.java
>
>
> my old facet index create by Lucene version=4.2
> use indexChecker ok.
> now I upgrade to Lucene 4.6 and put some new records to index.
> then reopen index, some files in indexdir missing
> no .si files.
> I debug into it,  new version format of segments.gen(segments_N) record bad 
> segments info.






[jira] [Updated] (LUCENE-5377) Lucene 4.6 cause mixed version segments info file(.si) wrong

2014-01-21 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5377:
---

Summary: Lucene 4.6 cause mixed version segments info file(.si) wrong  
(was: Lucene upgrade cause mixed version segments info file(.si) wrong)

> Lucene 4.6 cause mixed version segments info file(.si) wrong
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, modules/facet
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
> Attachments: TestCore_45.java, TestCore_46.java
>
>
> my old facet index create by Lucene version=4.2
> use indexChecker ok.
> now I upgrade to Lucene 4.6 and put some new records to index.
> then reopen index, some files in indexdir missing
> no .si files.
> I debug into it,  new version format of segments.gen(segments_N) record bad 
> segments info.






[jira] [Updated] (LUCENE-5377) Lucene upgrade cause mixed version segments info file(.si) wrong

2014-01-21 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5377:
---

Component/s: modules/facet
Summary: Lucene upgrade cause mixed version segments info file(.si) 
wrong  (was: Lucene mixed version segments cause segment info file(.si) wrong)

> Lucene upgrade cause mixed version segments info file(.si) wrong
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, modules/facet
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
> Attachments: TestCore_45.java, TestCore_46.java
>
>
> my old facet index create by Lucene version=4.2
> use indexChecker ok.
> now I upgrade to Lucene 4.6 and put some new records to index.
> then reopen index, some files in indexdir missing
> no .si files.
> I debug into it,  new version format of segments.gen(segments_N) record bad 
> segments info.






[jira] [Comment Edited] (LUCENE-5377) Lucene mixed version segments cause segment info file(.si) wrong

2014-01-21 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877492#comment-13877492
 ] 

Littlestar edited comment on LUCENE-5377 at 1/21/14 2:54 PM:
-

Reproduce steps:
1. Run TestCore_45 on Lucene 4.5.1; it generates an index in TestCore_45/data/taxo.
2. Copy TestCore_45/data/taxo into TestCore_46/data/taxo.
3. Run TestCore_46 on Lucene 4.6.0.

You will see the following error:
{code}
Exception in thread "main" java.io.FileNotFoundException: G:\JavaWorkspace\bug_test\Lucene_46\data\taxo\_2.si (.)
	at java.io.RandomAccessFile.open(Native Method)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
	at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
	at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:103)
	at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:58)
	at org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:51)
	at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:340)
	at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
	at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
	at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
	at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
	at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.readCommitData(DirectoryTaxonomyWriter.java:138)
	at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:222)
	at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:331)
	at TestCore_46$1.<init>(TestCore_46.java:27)
	at TestCore_46.open(TestCore_46.java:27)
	at TestCore_46.main(TestCore_46.java:75)
{code}


was (Author: cnstar9988):
reproduce steps:
1. Run TestCore_45 with Lucene 4.5.1; it will generate an index at TestCore_45/data/taxo.
2. Copy TestCore_45/data/taxo into TestCore_46/data/taxo.
3. Run TestCore_46 with Lucene 4.6.0.
You will see the following error:
Exception in thread "main" java.io.FileNotFoundException: G:\JavaWorkspace\bug_test\Lucene_46\data\taxo\_2.si (.)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:103)
at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:58)
at org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:51)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:340)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.readCommitData(DirectoryTaxonomyWriter.java:138)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:222)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:331)
at TestCore_46$1.<init>(TestCore_46.java:27)
at TestCore_46.open(TestCore_46.java:27)
at TestCore_46.main(TestCore_46.java:75)


==

> Lucene mixed version segments cause segment info file(.si) wrong
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
> Attachments: TestCore_45.java, TestCore_46.java
>
>
> My old facet index was created with Lucene 4.2;
> indexChecker reports it OK.
> Now I upgraded to Lucene 4.6 and added some new records to the index.
> On reopening the index, some files in the index dir are missing:
> there are no .si files.
> I debugged into it; the new-format segments.gen (segments_N) records bad
> segment info.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-

[jira] [Comment Edited] (LUCENE-5377) Lucene mixed version segments cause segment info file(.si) wrong

2014-01-21 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877492#comment-13877492
 ] 

Littlestar edited comment on LUCENE-5377 at 1/21/14 2:20 PM:
-

reproduce steps:
1. Run TestCore_45 with Lucene 4.5.1; it will generate an index at TestCore_45/data/taxo.
2. Copy TestCore_45/data/taxo into TestCore_46/data/taxo.
3. Run TestCore_46 with Lucene 4.6.0.
You will see the following error:
Exception in thread "main" java.io.FileNotFoundException: G:\JavaWorkspace\bug_test\Lucene_46\data\taxo\_2.si (.)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:103)
at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:58)
at org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:51)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:340)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.readCommitData(DirectoryTaxonomyWriter.java:138)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:222)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:331)
at TestCore_46$1.<init>(TestCore_46.java:27)
at TestCore_46.open(TestCore_46.java:27)
at TestCore_46.main(TestCore_46.java:75)


==


was (Author: cnstar9988):
reproduce steps:
1. Run TestCore_45 with Lucene 4.5.1; it generates an index at TestCore_45/data/taxo.
2. Copy data/taxo into TestCore_46/data/taxo.
3. Run TestCore_46 with Lucene 4.6.0.
You will see the following error:
Exception in thread "main" java.io.FileNotFoundException: G:\JavaWorkspace\bug_test\Lucene_46\data\taxo\_2.si (.)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:103)
at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:58)
at org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:51)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:340)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.readCommitData(DirectoryTaxonomyWriter.java:138)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:222)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:331)
at TestCore_46$1.<init>(TestCore_46.java:27)
at TestCore_46.open(TestCore_46.java:27)
at TestCore_46.main(TestCore_46.java:75)


==

> Lucene mixed version segments cause segment info file(.si) wrong
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
> Attachments: TestCore_45.java, TestCore_46.java
>
>
> My old facet index was created with Lucene 4.2;
> indexChecker reports it OK.
> Now I upgraded to Lucene 4.6 and added some new records to the index.
> On reopening the index, some files in the index dir are missing:
> there are no .si files.
> I debugged into it; the new-format segments.gen (segments_N) records bad
> segment info.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-

[jira] [Updated] (LUCENE-5377) Lucene mixed version segments cause segment info file(.si) wrong

2014-01-21 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5377:
---

Attachment: TestCore_46.java
TestCore_45.java

reproduce steps:
1. Run TestCore_45 with Lucene 4.5; it generates an index at TestCore_45/data/taxo.
2. Copy data/taxo into TestCore_46/data/taxo.
3. Run TestCore_46.
You will see the following error:
Exception in thread "main" java.io.FileNotFoundException: G:\JavaWorkspace\bug_test\Lucene_46\data\taxo\_2.si (.)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:103)
at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:58)
at org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:51)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:340)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.readCommitData(DirectoryTaxonomyWriter.java:138)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:222)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:331)
at TestCore_46$1.<init>(TestCore_46.java:27)
at TestCore_46.open(TestCore_46.java:27)
at TestCore_46.main(TestCore_46.java:75)


==

> Lucene mixed version segments cause segment info file(.si) wrong
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
> Attachments: TestCore_45.java, TestCore_46.java
>
>
> My old facet index was created with Lucene 4.2;
> indexChecker reports it OK.
> Now I upgraded to Lucene 4.6 and added some new records to the index.
> On reopening the index, some files in the index dir are missing:
> there are no .si files.
> I debugged into it; the new-format segments.gen (segments_N) records bad
> segment info.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5377) Lucene mixed version segments cause segment info file(.si) wrong

2014-01-21 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877492#comment-13877492
 ] 

Littlestar edited comment on LUCENE-5377 at 1/21/14 2:18 PM:
-

reproduce steps:
1. Run TestCore_45 with Lucene 4.5.1; it generates an index at TestCore_45/data/taxo.
2. Copy data/taxo into TestCore_46/data/taxo.
3. Run TestCore_46 with Lucene 4.6.0.
You will see the following error:
Exception in thread "main" java.io.FileNotFoundException: G:\JavaWorkspace\bug_test\Lucene_46\data\taxo\_2.si (.)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:103)
at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:58)
at org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:51)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:340)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.readCommitData(DirectoryTaxonomyWriter.java:138)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:222)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:331)
at TestCore_46$1.<init>(TestCore_46.java:27)
at TestCore_46.open(TestCore_46.java:27)
at TestCore_46.main(TestCore_46.java:75)


==


was (Author: cnstar9988):
reproduce steps:
1. Run TestCore_45 with Lucene 4.5; it generates an index at TestCore_45/data/taxo.
2. Copy data/taxo into TestCore_46/data/taxo.
3. Run TestCore_46.
You will see the following error:
Exception in thread "main" java.io.FileNotFoundException: G:\JavaWorkspace\bug_test\Lucene_46\data\taxo\_2.si (.)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:103)
at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:58)
at org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:51)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:340)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.readCommitData(DirectoryTaxonomyWriter.java:138)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:222)
at org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>(DirectoryTaxonomyWriter.java:331)
at TestCore_46$1.<init>(TestCore_46.java:27)
at TestCore_46.open(TestCore_46.java:27)
at TestCore_46.main(TestCore_46.java:75)


==

> Lucene mixed version segments cause segment info file(.si) wrong
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
> Attachments: TestCore_45.java, TestCore_46.java
>
>
> My old facet index was created with Lucene 4.2;
> indexChecker reports it OK.
> Now I upgraded to Lucene 4.6 and added some new records to the index.
> On reopening the index, some files in the index dir are missing:
> there are no .si files.
> I debugged into it; the new-format segments.gen (segments_N) records bad
> segment info.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-

[jira] [Updated] (LUCENE-5377) Lucene mixed version segments cause segment info file(.si) wrong

2013-12-23 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5377:
---

Summary: Lucene mixed version segments cause segment info file(.si) wrong  
(was: Lucene mixed index segments cause segment info file(.si) unversioned)

> Lucene mixed version segments cause segment info file(.si) wrong
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
>
> My old facet index was created with Lucene 4.2;
> indexChecker reports it OK.
> Now I upgraded to Lucene 4.6 and added some new records to the index.
> On reopening the index, some files in the index dir are missing:
> there are no .si files.
> I debugged into it; the new-format segments.gen (segments_N) records bad
> segment info.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5377) Lucene mixed index segments cause segment info file(.si) unversioned

2013-12-23 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13856110#comment-13856110
 ] 

Littlestar commented on LUCENE-5377:


Lucene 4.5/4.5.1 is ok.


> Lucene mixed index segments cause segment info file(.si) unversioned
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
>
> My old facet index was created with Lucene 4.2;
> indexChecker reports it OK.
> Now I upgraded to Lucene 4.6 and added some new records to the index.
> On reopening the index, some files in the index dir are missing:
> there are no .si files.
> I debugged into it; the new-format segments.gen (segments_N) records bad
> segment info.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5377) Lucene mixed index segments cause segment info file(.si) unversioned

2013-12-23 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13856110#comment-13856110
 ] 

Littlestar edited comment on LUCENE-5377 at 12/24/13 4:07 AM:
--

Lucene 4.5/4.5.1 is OK, but it fails in 4.6.0.


was (Author: cnstar9988):
Lucene 4.5/4.5.1 is ok.


> Lucene mixed index segments cause segment info file(.si) unversioned
> 
>
> Key: LUCENE-5377
> URL: https://issues.apache.org/jira/browse/LUCENE-5377
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.6
> Environment: windows/linux
>Reporter: Littlestar
>
> My old facet index was created with Lucene 4.2;
> indexChecker reports it OK.
> Now I upgraded to Lucene 4.6 and added some new records to the index.
> On reopening the index, some files in the index dir are missing:
> there are no .si files.
> I debugged into it; the new-format segments.gen (segments_N) records bad
> segment info.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5377) Lucene mixed index segments cause segment info file(.si) unversioned

2013-12-23 Thread Littlestar (JIRA)
Littlestar created LUCENE-5377:
--

 Summary: Lucene mixed index segments cause segment info file(.si) 
unversioned
 Key: LUCENE-5377
 URL: https://issues.apache.org/jira/browse/LUCENE-5377
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.6
 Environment: windows/linux
Reporter: Littlestar


My old facet index was created with Lucene 4.2;
indexChecker reports it OK.

Now I upgraded to Lucene 4.6 and added some new records to the index.
On reopening the index, some files in the index dir are missing:
there are no .si files.

I debugged into it; the new-format segments.gen (segments_N) records bad
segment info.




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5212) java 7u40 causes sigsegv and corrupt term vectors

2013-10-13 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793916#comment-13793916
 ] 

Littlestar commented on LUCENE-5212:


Is there a small corrupt index that can reproduce the problem with CheckIndex? Thanks.

> java 7u40 causes sigsegv and corrupt term vectors
> -
>
> Key: LUCENE-5212
> URL: https://issues.apache.org/jira/browse/LUCENE-5212
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: crashFaster2.0.patch, crashFaster.patch, 
> hs_err_pid32714.log, jenkins.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13790236#comment-13790236
 ] 

Littlestar commented on LUCENE-5267:


Thanks, most of the records were recovered.
But why did the index get corrupted? Maybe the compressor or the writer has a bug ...


> java.lang.ArrayIndexOutOfBoundsException on reading data
> 
>
> Key: LUCENE-5267
> URL: https://issues.apache.org/jira/browse/LUCENE-5267
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
>Reporter: Littlestar
>Assignee: Adrien Grand
>  Labels: LZ4
>
> java.lang.ArrayIndexOutOfBoundsException
>   at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
>   at 
> org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
>   at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
>   at 
> org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.document(SlowCompositeReaderWrapper.java:212)
>   at 
> org.apache.lucene.index.FilterAtomicReader.document(FilterAtomicReader.java:365)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at org.apache.lucene.index.IndexReader.document(IndexReader.java:447)
>   at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:204)



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13790134#comment-13790134
 ] 

Littlestar commented on LUCENE-5267:


When the ArrayIndexOutOfBoundsException is skipped, the next error appears:
ERROR [Invalid vLong detected (negative values disallowed)]
java.lang.RuntimeException: Invalid vLong detected (negative values disallowed)
at 
org.apache.lucene.store.ByteArrayDataInput.readVLong(ByteArrayDataInput.java:152)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:342)
at 
org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
at org.apache.lucene.index.IndexReader.document(IndexReader.java:436)
at 
org.apache.lucene.index.CheckIndex.testStoredFields(CheckIndex.java:1268)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:626)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1903)
test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
test: docvalues...........OK [12 docvalues fields; 7 BINARY; 3 NUMERIC; 2 SORTED; 0 SORTED_SET]
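The "Invalid vLong detected (negative values disallowed)" error above comes from variable-length long decoding. As a minimal standalone sketch (not Lucene's actual ByteArrayDataInput, just the standard 7-bits-per-byte varint scheme it uses; class and method names here are illustrative), a corrupt byte run can decode to a negative long, which the reader rejects:

```java
public class VLongDemo {
    // Sketch of vLong decoding: 7 payload bits per byte, high bit set means
    // "more bytes follow". A negative decoded value signals index corruption.
    static long readVLong(byte[] buf, int pos) {
        byte b = buf[pos++];
        long i = b & 0x7FL;
        for (int shift = 7; (b & 0x80) != 0; shift += 7) {
            b = buf[pos++];
            i |= (b & 0x7FL) << shift;
        }
        if (i < 0) {
            throw new RuntimeException("Invalid vLong detected (negative values disallowed)");
        }
        return i;
    }

    public static void main(String[] args) {
        // 300 encodes as 0xAC 0x02 (low 7 bits + continuation, then 300 >> 7)
        System.out.println(readVLong(new byte[] {(byte) 0xAC, 0x02}, 0)); // prints 300

        // ten corrupt bytes decode into the sign bit, yielding a negative value
        byte[] corrupt = new byte[10];
        java.util.Arrays.fill(corrupt, (byte) 0xFF);
        corrupt[9] = 0x7F;
        try {
            readVLong(corrupt, 0);
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why a damaged .fdt block surfaces as a RuntimeException rather than garbage data: the negative-value check catches the corruption early.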

> java.lang.ArrayIndexOutOfBoundsException on reading data
> 
>
> Key: LUCENE-5267
> URL: https://issues.apache.org/jira/browse/LUCENE-5267
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
>Reporter: Littlestar
>Assignee: Adrien Grand
>  Labels: LZ4
>
> java.lang.ArrayIndexOutOfBoundsException
>   at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
>   at 
> org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
>   at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
>   at 
> org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.document(SlowCompositeReaderWrapper.java:212)
>   at 
> org.apache.lucene.index.FilterAtomicReader.document(FilterAtomicReader.java:365)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at org.apache.lucene.index.IndexReader.document(IndexReader.java:447)
>   at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:204)



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13790131#comment-13790131
 ] 

Littlestar commented on LUCENE-5267:


// Lucene42Codec + LZ4
public final class Hybase42StandardCodec extends FilterCodec {
    public Hybase42StandardCodec() {
        super("hybaseStd42x", new Lucene42Codec());
    }
}

>>> disk-related issues in your system logs and share the .fdx and .fdt files of the broken segment
Too big (5 GB).

> java.lang.ArrayIndexOutOfBoundsException on reading data
> 
>
> Key: LUCENE-5267
> URL: https://issues.apache.org/jira/browse/LUCENE-5267
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
>Reporter: Littlestar
>Assignee: Adrien Grand
>  Labels: LZ4
>
> java.lang.ArrayIndexOutOfBoundsException
>   at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
>   at 
> org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
>   at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
>   at 
> org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.document(SlowCompositeReaderWrapper.java:212)
>   at 
> org.apache.lucene.index.FilterAtomicReader.document(FilterAtomicReader.java:365)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at org.apache.lucene.index.IndexReader.document(IndexReader.java:447)
>   at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:204)



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-09 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13790117#comment-13790117
 ] 

Littlestar commented on LUCENE-5267:


dOff - matchDec < 0, so System.arraycopy throws java.lang.ArrayIndexOutOfBoundsException:
dest.length=33288,dOff=3184,matchDec=34510,matchLen=15,fastLen=16
dest.length=33288,dOff=3213,matchDec=34724,matchLen=9,fastLen=16
dest.length=33288,dOff=3229,matchDec=45058,matchLen=12,fastLen=16
dest.length=33288,dOff=3255,matchDec=20482,matchLen=9,fastLen=16
dest.length=33288,dOff=3275,matchDec=26122,matchLen=12,fastLen=16
dest.length=33288,dOff=3570,matchDec=35228,matchLen=6,fastLen=8
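Every log line above has matchDec larger than dOff, so the arraycopy source offset dOff - matchDec is negative. A self-contained sketch (using the numbers from the first log line; no Lucene classes involved) reproduces the resulting exception:

```java
public class NegativeOffsetDemo {
    public static void main(String[] args) {
        byte[] dest = new byte[33288];      // dest.length from the first log line
        int dOff = 3184, matchDec = 34510;  // dOff - matchDec = -31326
        int fastLen = 16;
        boolean caught = false;
        try {
            // a negative srcPos is out of bounds for any array
            System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen);
        } catch (IndexOutOfBoundsException e) {
            caught = true;                  // ArrayIndexOutOfBoundsException on HotSpot
        }
        System.out.println("negative source offset rejected: " + caught);
        // prints "negative source offset rejected: true"
    }
}
```

A matchDec that points before the start of the buffer can only come from corrupt compressed input, which is consistent with the broken .fdt segment discussed in this issue.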



> java.lang.ArrayIndexOutOfBoundsException on reading data
> 
>
> Key: LUCENE-5267
> URL: https://issues.apache.org/jira/browse/LUCENE-5267
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
>Reporter: Littlestar
>Assignee: Adrien Grand
>  Labels: LZ4
>
> java.lang.ArrayIndexOutOfBoundsException
>   at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
>   at 
> org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
>   at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
>   at 
> org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.document(SlowCompositeReaderWrapper.java:212)
>   at 
> org.apache.lucene.index.FilterAtomicReader.document(FilterAtomicReader.java:365)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at org.apache.lucene.index.IndexReader.document(IndexReader.java:447)
>   at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:204)



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-08 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789958#comment-13789958
 ] 

Littlestar edited comment on LUCENE-5267 at 10/9/13 6:54 AM:
-

{noformat}
public static int decompress(DataInput compressed, int decompressedLen, byte[] dest, int dOff) throws IOException {
  final int destEnd = dest.length;

  do {
    ..

    // copying a multiple of 8 bytes can make decompression from 5% to 10% faster
    final int fastLen = (matchLen + 7) & 0xFFF8;
    if (matchDec < matchLen || dOff + fastLen > destEnd) {
      // overlap -> naive incremental copy
      for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
        dest[dOff] = dest[ref];
      }
    } else {
      // no overlap -> arraycopy
      try {
        System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen);
      } catch (Throwable e) {
        System.out.println("dest.length=" + dest.length + ",dOff=" + dOff + ",matchDec=" + matchDec + ",matchLen=" + matchLen + ",fastLen=" + fastLen);
      }
      dOff += matchLen;
    }
  } while (dOff < decompressedLen);

  return dOff;
}
{noformat}
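The {{if (matchDec < matchLen || ...)}} branch in the code above exists because an LZ4 match may overlap the bytes it is still producing, in which case the copy must proceed byte by byte. A minimal standalone sketch of that incremental-copy loop (no Lucene types; names are illustrative):

```java
public class Lz4OverlapCopyDemo {
    // Naive incremental copy used on the overlap path (matchDec < matchLen):
    // each copied byte may itself be part of the match being expanded.
    static int incrementalCopy(byte[] dest, int dOff, int matchDec, int matchLen) {
        for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
            dest[dOff] = dest[ref];
        }
        return dOff;
    }

    public static void main(String[] args) {
        byte[] dest = new byte[8];
        dest[0] = 'a';
        dest[1] = 'b';
        // match of length 6 referring back 2 bytes: repeats "ab" -> "abababab"
        int dOff = incrementalCopy(dest, 2, 2, 6);
        System.out.println(new String(dest, 0, dOff)); // prints "abababab"
    }
}
```

A plain System.arraycopy with the same arguments would copy the original (still-empty) source bytes as if through a temporary buffer, producing wrong output, which is why the naive loop is required whenever the match overlaps its destination.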



was (Author: cnstar9988):
{noformat}
public static int decompress(DataInput compressed, int decompressedLen, byte[] dest, int dOff) throws IOException {
  final int destEnd = dest.length;

  do {
    ..

    // copying a multiple of 8 bytes can make decompression from 5% to 10% faster
    final int fastLen = (matchLen + 7) & 0xFFF8;
    if (matchDec < matchLen || dOff + fastLen > destEnd) {
      // overlap -> naive incremental copy
      for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
        dest[dOff] = dest[ref];
      }
    } else {
      // no overlap -> arraycopy
      // System.out.println("dest.length=" + dest.length + ",dOff=" + dOff + ",matchDec=" + matchDec + ",fastLen=" + fastLen);
      System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen); // here throws java.lang.ArrayIndexOutOfBoundsException
      dOff += matchLen;
    }
  } while (dOff < decompressedLen);

  return dOff;
}
{noformat}


> java.lang.ArrayIndexOutOfBoundsException on reading data
> 
>
> Key: LUCENE-5267
> URL: https://issues.apache.org/jira/browse/LUCENE-5267
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
>Reporter: Littlestar
>  Labels: LZ4
>
> java.lang.ArrayIndexOutOfBoundsException
>   at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
>   at 
> org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
>   at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
>   at 
> org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.document(SlowCompositeReaderWrapper.java:212)
>   at 
> org.apache.lucene.index.FilterAtomicReader.document(FilterAtomicReader.java:365)
>   at 
> org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
>   at org.apache.lucene.index.IndexReader.document(IndexReader.java:447)
>   at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:204)



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-08 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5267:
---

Labels: LZ4  (was: )







[jira] [Comment Edited] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-08 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789958#comment-13789958
 ] 

Littlestar edited comment on LUCENE-5267 at 10/9/13 3:00 AM:
-

{noformat}
public static int decompress(DataInput compressed, int decompressedLen, byte[] dest, int dOff) throws IOException {
  final int destEnd = dest.length;

  do {
    ..

    // copying a multiple of 8 bytes can make decompression from 5% to 10% faster
    final int fastLen = (matchLen + 7) & 0xFFF8;
    if (matchDec < matchLen || dOff + fastLen > destEnd) {
      // overlap -> naive incremental copy
      for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
        dest[dOff] = dest[ref];
      }
    } else {
      // no overlap -> arraycopy
      // System.out.println("dest.length=" + dest.length + ",dOff=" + dOff + ",matchDec=" + matchDec + ",fastLen=" + fastLen);
      System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen); // here throws java.lang.ArrayIndexOutOfBoundsException
      dOff += matchLen;
    }
  } while (dOff < decompressedLen);

  return dOff;
}
{noformat}



was (Author: cnstar9988):
{noformat}
public static int decompress(DataInput compressed, int decompressedLen, byte[] dest, int dOff) throws IOException {
  final int destEnd = dest.length;

  do {
    ..

    // copying a multiple of 8 bytes can make decompression from 5% to 10% faster
    final int fastLen = (matchLen + 7) & 0xFFF8;
    if (matchDec < matchLen || dOff + fastLen > destEnd) {
      // overlap -> naive incremental copy
      for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
        dest[dOff] = dest[ref];
      }
    } else {
      // no overlap -> arraycopy
      // System.out.println("dest.length=" + dest.length + ",dOff=" + dOff + ",matchDec=" + matchDec + ",fastLen=" + fastLen);
      System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen); // here throws java.lang.ArrayIndexOutOfBoundsException
      dOff += matchLen; // here maybe dOff += fastLen;
    }
  } while (dOff < decompressedLen);

  return dOff;
}
{noformat}








[jira] [Comment Edited] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-08 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789958#comment-13789958
 ] 

Littlestar edited comment on LUCENE-5267 at 10/9/13 2:50 AM:
-

{noformat}
public static int decompress(DataInput compressed, int decompressedLen, byte[] dest, int dOff) throws IOException {
  final int destEnd = dest.length;

  do {
    ..

    // copying a multiple of 8 bytes can make decompression from 5% to 10% faster
    final int fastLen = (matchLen + 7) & 0xFFF8;
    if (matchDec < matchLen || dOff + fastLen > destEnd) {
      // overlap -> naive incremental copy
      for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
        dest[dOff] = dest[ref];
      }
    } else {
      // no overlap -> arraycopy
      // System.out.println("dest.length=" + dest.length + ",dOff=" + dOff + ",matchDec=" + matchDec + ",fastLen=" + fastLen);
      System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen); // here throws java.lang.ArrayIndexOutOfBoundsException
      dOff += matchLen; // here maybe dOff += fastLen;
    }
  } while (dOff < decompressedLen);

  return dOff;
}
{noformat}



was (Author: cnstar9988):
{noformat}
public static int decompress(DataInput compressed, int decompressedLen, byte[] dest, int dOff) throws IOException {
  final int destEnd = dest.length;

  do {
    // literals
    final int token = compressed.readByte() & 0xFF;
    int literalLen = token >>> 4;

    if (literalLen != 0) {
      if (literalLen == 0x0F) {
        byte len;
        while ((len = compressed.readByte()) == (byte) 0xFF) {
          literalLen += 0xFF;
        }
        literalLen += len & 0xFF;
      }
      compressed.readBytes(dest, dOff, literalLen);
      dOff += literalLen;
    }

    if (dOff >= decompressedLen) {
      break;
    }

    // matchs
    final int matchDec = (compressed.readByte() & 0xFF) | ((compressed.readByte() & 0xFF) << 8);
    assert matchDec > 0;

    int matchLen = token & 0x0F;
    if (matchLen == 0x0F) {
      int len;
      while ((len = compressed.readByte()) == (byte) 0xFF) {
        matchLen += 0xFF;
      }
      matchLen += len & 0xFF;
    }
    matchLen += MIN_MATCH;

    // copying a multiple of 8 bytes can make decompression from 5% to 10% faster
    final int fastLen = (matchLen + 7) & 0xFFF8;
    if (matchDec < matchLen || dOff + fastLen > destEnd) {
      // overlap -> naive incremental copy
      for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
        dest[dOff] = dest[ref];
      }
    } else {
      // no overlap -> arraycopy
      // System.out.println("dest.length=" + dest.length + ",dOff=" + dOff + ",matchDec=" + matchDec + ",fastLen=" + fastLen);
      System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen); // here throws java.lang.ArrayIndexOutOfBoundsException
      dOff += matchLen; // here maybe dOff += fastLen;
    }
  } while (dOff < decompressedLen);

  return dOff;
}
{noformat}








[jira] [Comment Edited] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-08 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789958#comment-13789958
 ] 

Littlestar edited comment on LUCENE-5267 at 10/9/13 2:41 AM:
-

{noformat}
public static int decompress(DataInput compressed, int decompressedLen, byte[] dest, int dOff) throws IOException {
  final int destEnd = dest.length;

  do {
    // literals
    final int token = compressed.readByte() & 0xFF;
    int literalLen = token >>> 4;

    if (literalLen != 0) {
      if (literalLen == 0x0F) {
        byte len;
        while ((len = compressed.readByte()) == (byte) 0xFF) {
          literalLen += 0xFF;
        }
        literalLen += len & 0xFF;
      }
      compressed.readBytes(dest, dOff, literalLen);
      dOff += literalLen;
    }

    if (dOff >= decompressedLen) {
      break;
    }

    // matchs
    final int matchDec = (compressed.readByte() & 0xFF) | ((compressed.readByte() & 0xFF) << 8);
    assert matchDec > 0;

    int matchLen = token & 0x0F;
    if (matchLen == 0x0F) {
      int len;
      while ((len = compressed.readByte()) == (byte) 0xFF) {
        matchLen += 0xFF;
      }
      matchLen += len & 0xFF;
    }
    matchLen += MIN_MATCH;

    // copying a multiple of 8 bytes can make decompression from 5% to 10% faster
    final int fastLen = (matchLen + 7) & 0xFFF8;
    if (matchDec < matchLen || dOff + fastLen > destEnd) {
      // overlap -> naive incremental copy
      for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
        dest[dOff] = dest[ref];
      }
    } else {
      // no overlap -> arraycopy
      // System.out.println("dest.length=" + dest.length + ",dOff=" + dOff + ",matchDec=" + matchDec + ",fastLen=" + fastLen);
      System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen); // here throws java.lang.ArrayIndexOutOfBoundsException
      dOff += matchLen; // here maybe dOff += fastLen;
    }
  } while (dOff < decompressedLen);

  return dOff;
}
{noformat}



was (Author: cnstar9988):
{noformat}
public static int decompress(DataInput compressed, int decompressedLen, byte[] dest, int dOff) throws IOException {
  final int destEnd = dest.length;

  do {
    // literals
    final int token = compressed.readByte() & 0xFF;
    int literalLen = token >>> 4;

    if (literalLen != 0) {
      if (literalLen == 0x0F) {
        byte len;
        while ((len = compressed.readByte()) == (byte) 0xFF) {
          literalLen += 0xFF;
        }
        literalLen += len & 0xFF;
      }
      compressed.readBytes(dest, dOff, literalLen);
      dOff += literalLen;
    }

    if (dOff >= decompressedLen) {
      break;
    }

    // matchs
    final int matchDec = (compressed.readByte() & 0xFF) | ((compressed.readByte() & 0xFF) << 8);
    assert matchDec > 0;

    int matchLen = token & 0x0F;
    if (matchLen == 0x0F) {
      int len;
      while ((len = compressed.readByte()) == (byte) 0xFF) {
        matchLen += 0xFF;
      }
      matchLen += len & 0xFF;
    }
    matchLen += MIN_MATCH;

    // copying a multiple of 8 bytes can make decompression from 5% to 10% faster
    final int fastLen = (matchLen + 7) & 0xFFF8;
    if (matchDec < matchLen || dOff + fastLen > destEnd) {
      // overlap -> naive incremental copy
      for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
        dest[dOff] = dest[ref];
      }
    } else {
      // no overlap -> arraycopy
      System.out.println("dest.length=" + dest.length + ",dOff=" + dOff + ",matchDec=" + matchDec + ",fastLen=" + fastLen);
      System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen); // here throws java.lang.ArrayIndexOutOfBoundsException
      dOff += matchLen; // here maybe dOff += fastLen;
    }
  } while (dOff < decompressedLen);

  return dOff;
}
{noformat}



[jira] [Commented] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-08 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789958#comment-13789958
 ] 

Littlestar commented on LUCENE-5267:


{noformat}
public static int decompress(DataInput compressed, int decompressedLen, byte[] dest, int dOff) throws IOException {
  final int destEnd = dest.length;

  do {
    // literals
    final int token = compressed.readByte() & 0xFF;
    int literalLen = token >>> 4;

    if (literalLen != 0) {
      if (literalLen == 0x0F) {
        byte len;
        while ((len = compressed.readByte()) == (byte) 0xFF) {
          literalLen += 0xFF;
        }
        literalLen += len & 0xFF;
      }
      compressed.readBytes(dest, dOff, literalLen);
      dOff += literalLen;
    }

    if (dOff >= decompressedLen) {
      break;
    }

    // matchs
    final int matchDec = (compressed.readByte() & 0xFF) | ((compressed.readByte() & 0xFF) << 8);
    assert matchDec > 0;

    int matchLen = token & 0x0F;
    if (matchLen == 0x0F) {
      int len;
      while ((len = compressed.readByte()) == (byte) 0xFF) {
        matchLen += 0xFF;
      }
      matchLen += len & 0xFF;
    }
    matchLen += MIN_MATCH;

    // copying a multiple of 8 bytes can make decompression from 5% to 10% faster
    final int fastLen = (matchLen + 7) & 0xFFF8;
    if (matchDec < matchLen || dOff + fastLen > destEnd) {
      // overlap -> naive incremental copy
      for (int ref = dOff - matchDec, end = dOff + matchLen; dOff < end; ++ref, ++dOff) {
        dest[dOff] = dest[ref];
      }
    } else {
      // no overlap -> arraycopy
      System.out.println("dest.length=" + dest.length + ",dOff=" + dOff + ",matchDec=" + matchDec + ",fastLen=" + fastLen);
      System.arraycopy(dest, dOff - matchDec, dest, dOff, fastLen); // here throws java.lang.ArrayIndexOutOfBoundsException
      dOff += matchLen; // here maybe dOff += fastLen;
    }
  } while (dOff < decompressedLen);

  return dOff;
}
{noformat}
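The token handling at the top of the quoted routine follows the LZ4 block format: the token's high nibble is the literal length, the low nibble is the match length, and the sentinel value 0x0F signals additional length bytes, each 0xFF byte adding 255. A minimal sketch under those assumptions (the class and method names `Lz4TokenSketch` and `extendLen` are invented for illustration, not Lucene code):

```java
// Illustrative LZ4 token decoding: nibble extraction plus the 0x0F
// extension-byte scheme described above.
public class Lz4TokenSketch {

    // If len hit the nibble maximum (0x0F), read extension bytes from
    // `extra` starting at pos[0]; each 0xFF byte adds 255, the final
    // non-0xFF byte terminates the sequence.
    static int extendLen(int len, byte[] extra, int[] pos) {
        if (len == 0x0F) {
            int b;
            while ((b = extra[pos[0]++] & 0xFF) == 0xFF) {
                len += 0xFF;
            }
            len += b;
        }
        return len;
    }

    public static void main(String[] args) {
        int token = 0xF3;              // high nibble 0x0F, low nibble 3
        int literalLen = token >>> 4;  // 15 -> needs extension bytes
        int matchLen = token & 0x0F;   // 3  -> no extension

        byte[] extra = {(byte) 0xFF, 0x02};
        int[] pos = {0};
        literalLen = extendLen(literalLen, extra, pos);
        System.out.println(literalLen);  // prints 272 (15 + 255 + 2)
        System.out.println(matchLen);    // prints 3
    }
}
```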








[jira] [Comment Edited] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on LZ4

2013-10-08 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789918#comment-13789918
 ] 

Littlestar edited comment on LUCENE-5267 at 10/9/13 1:25 AM:
-

Lucene 4.5 also failed.

org.apache.lucene.index.CheckIndex /ssd/bad_lz4_index
===
Segments file=segments_1r7 numSegments=1 version=4.4 format= userData={$SEQ$VAL$=0}
  1 of 1: name=_1xt docCount=1490908
codec=hybaseStd42x
compound=false
numFiles=13
size (MB)=4,895.955
diagnostics = {timestamp=1378489485190, os=Linux, 
os.version=2.6.32-220.el6.x86_64, mergeFactor=18, source=merge, 
lucene.version=4.4.0 1504776 - sarowe - 2013-07-19 02:49:47, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.OK
test: fields..OK [58 fields]
test: field norms.OK [11 fields]
test: terms, freq, prox...OK [23042711 terms; 1058472153 terms/docs pairs; 
935028031 tokens]
test: stored fields...ERROR [null]
java.lang.ArrayIndexOutOfBoundsException
at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
at 
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
at 
org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
at org.apache.lucene.index.IndexReader.document(IndexReader.java:436)
at 
org.apache.lucene.index.CheckIndex.testStoredFields(CheckIndex.java:1268)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:626)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1903)
test: term vectorsOK [0 total vector count; avg 0 term/freq vector 
fields per doc]
test: docvalues...OK [12 docvalues fields; 7 BINARY; 3 NUMERIC; 2 
SORTED; 0 SORTED_SET]
FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: Stored Field test failed
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:640)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1903)

WARNING: 1 broken segments (containing 1490908 documents) detected
WARNING: would write new segments file, and 1490908 documents would be lost, if 
-fix were specified


was (Author: cnstar9988):
Lucene 4.5 also failed.

Segments file=segments_1r7 numSegments=1 version=4.4 format= userData={$SEQ$VAL$=0}
  1 of 1: name=_1xt docCount=1490908
codec=hybaseStd42x
compound=false
numFiles=13
size (MB)=4,895.955
diagnostics = {timestamp=1378489485190, os=Linux, 
os.version=2.6.32-220.el6.x86_64, mergeFactor=18, source=merge, 
lucene.version=4.4.0 1504776 - sarowe - 2013-07-19 02:49:47, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.OK
test: fields..OK [58 fields]
test: field norms.OK [11 fields]
test: terms, freq, prox...OK [23042711 terms; 1058472153 terms/docs pairs; 
935028031 tokens]
test: stored fields...ERROR [null]
java.lang.ArrayIndexOutOfBoundsException
at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
at 
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
at 
org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
at org.apache.lucene.index.IndexReader.document(IndexReader.java:436)
at 
org.apache.lucene.index.CheckIndex.testStoredFields(CheckIndex.java:1268)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:626)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1903)
test: term vectorsOK [0 total vector count; avg 0 term/freq vector 
fields per doc]
test: docvalues...OK [12 docvalues fields; 7 BINARY; 3 NUMERIC; 2 
SORTED; 0 SORTED_SET]
FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: Stored Field test failed
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:640)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1903)

WARNING: 1 broken segments (containing 1490908 documents) detected
WARNING: would write new segments file, and 1490908 documents would be lost, if 
-fix were specified


[jira] [Updated] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on reading data

2013-10-08 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5267:
---

Description: 
java.lang.ArrayIndexOutOfBoundsException
at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
at 
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
at 
org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
at 
org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
at 
org.apache.lucene.index.SlowCompositeReaderWrapper.document(SlowCompositeReaderWrapper.java:212)
at 
org.apache.lucene.index.FilterAtomicReader.document(FilterAtomicReader.java:365)
at 
org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
at org.apache.lucene.index.IndexReader.document(IndexReader.java:447)
at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:204)


  was:

java.lang.ArrayIndexOutOfBoundsException
at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
at 
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
at 
org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
at 
org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
at 
org.apache.lucene.index.SlowCompositeReaderWrapper.document(SlowCompositeReaderWrapper.java:212)
at 
org.apache.lucene.index.FilterAtomicReader.document(FilterAtomicReader.java:365)
at 
org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
at org.apache.lucene.index.IndexReader.document(IndexReader.java:447)
at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:204)


Summary: java.lang.ArrayIndexOutOfBoundsException on reading data  
(was: java.lang.ArrayIndexOutOfBoundsException on LZ4)







[jira] [Commented] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on LZ4

2013-10-08 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789918#comment-13789918
 ] 

Littlestar commented on LUCENE-5267:


Lucene 4.5 also failed.

Segments file=segments_1r7 numSegments=1 version=4.4 format= userData={$SEQ$VAL$=0}
  1 of 1: name=_1xt docCount=1490908
codec=hybaseStd42x
compound=false
numFiles=13
size (MB)=4,895.955
diagnostics = {timestamp=1378489485190, os=Linux, 
os.version=2.6.32-220.el6.x86_64, mergeFactor=18, source=merge, 
lucene.version=4.4.0 1504776 - sarowe - 2013-07-19 02:49:47, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.OK
test: fields..OK [58 fields]
test: field norms.OK [11 fields]
test: terms, freq, prox...OK [23042711 terms; 1058472153 terms/docs pairs; 
935028031 tokens]
test: stored fields...ERROR [null]
java.lang.ArrayIndexOutOfBoundsException
at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
at 
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
at 
org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
at org.apache.lucene.index.IndexReader.document(IndexReader.java:436)
at 
org.apache.lucene.index.CheckIndex.testStoredFields(CheckIndex.java:1268)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:626)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1903)
test: term vectorsOK [0 total vector count; avg 0 term/freq vector 
fields per doc]
test: docvalues...OK [12 docvalues fields; 7 BINARY; 3 NUMERIC; 2 
SORTED; 0 SORTED_SET]
FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: Stored Field test failed
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:640)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1903)

WARNING: 1 broken segments (containing 1490908 documents) detected
WARNING: would write new segments file, and 1490908 documents would be lost, if 
-fix were specified







[jira] [Created] (LUCENE-5267) java.lang.ArrayIndexOutOfBoundsException on LZ4

2013-10-08 Thread Littlestar (JIRA)
Littlestar created LUCENE-5267:
--

 Summary: java.lang.ArrayIndexOutOfBoundsException on LZ4
 Key: LUCENE-5267
 URL: https://issues.apache.org/jira/browse/LUCENE-5267
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.4
Reporter: Littlestar



java.lang.ArrayIndexOutOfBoundsException
at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:132)
at 
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:336)
at 
org.apache.lucene.index.SegmentReader.document(SegmentReader.java:133)
at 
org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
at 
org.apache.lucene.index.SlowCompositeReaderWrapper.document(SlowCompositeReaderWrapper.java:212)
at 
org.apache.lucene.index.FilterAtomicReader.document(FilterAtomicReader.java:365)
at 
org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
at org.apache.lucene.index.IndexReader.document(IndexReader.java:447)
at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:204)







[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778368#comment-13778368
 ] 

Littlestar commented on LUCENE-5218:


Patch tested OK.
Please submit it to trunk/trunk4x/trunk45, thanks.

Checking only these segments: _d8:
  44 of 54: name=_d8 docCount=19599
codec=hybaseStd42x
compound=true
numFiles=3
size (MB)=9.559
diagnostics = {timestamp=1379167874407, mergeFactor=22, 
os.version=2.6.32-358.el6.x86_64, os=Linux, lucene.version=4.4.0 1504776 - 
sarowe - 2013-07-19 02:49:47, source=merge, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.OK
test: fields..OK [29 fields]
test: field norms.OK [4 fields]
test: terms, freq, prox...OK [289268 terms; 3096641 terms/docs pairs; 
689694 tokens]
test: stored fields...OK [408046 total field count; avg 20.82 fields 
per doc]
test: term vectorsOK [0 total vector count; avg 0 term/freq vector 
fields per doc]
test: docvalues...OK [0 total doc count; 13 docvalues fields]

No problems were detected with this index.

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>Assignee: Michael McCandless
> Attachments: lucene44-LUCENE-5218.zip, LUCENE-5218.patch
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777672#comment-13777672
 ] 

Littlestar commented on LUCENE-5218:


Maybe the binary doc value length is 0.
My app converts strings to byte[] and adds them as BinaryDocValues.

The above patch works for me.
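As a minimal illustration of that suspicion (plain Java only, no Lucene types), an empty string converts to a zero-length byte[] — exactly the kind of binary doc value that would produce a slice with length == 0 during merging:

```java
import java.nio.charset.StandardCharsets;

public class ZeroLengthValue {
    public static void main(String[] args) {
        // An empty string yields a zero-length byte[]. Stored as a binary
        // doc value, it produces a slice with length == 0, which can start
        // exactly at the end of the paged buffer.
        byte[] value = "".getBytes(StandardCharsets.UTF_8);
        System.out.println(value.length); // prints 0
    }
}
```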

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777584#comment-13777584
 ] 

Littlestar edited comment on LUCENE-5218 at 9/25/13 3:07 PM:
-

{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];  //here is 
java.lang.ArrayIndexOutOfBoundsException
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When the java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blockSize=65536, blockBits=16, blockMask=65535, blocks.length=2

my patch:
{noformat}
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat}
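For illustration only, the patched logic can be condensed into a self-contained sketch (the `blocks` array, `blockBits`, and `EMPTY_BYTES` here are simplified stand-ins for the real `PagedBytes.Reader` fields, sized to match the debug values above; the `length`/`offset` bookkeeping on BytesRef is omitted):

```java
// Minimal stand-in for PagedBytes.Reader#fillSlice with the proposed guard.
public class FillSliceSketch {
    static final byte[] EMPTY_BYTES = new byte[0];

    static final int blockBits = 16;
    static final int blockSize = 1 << blockBits;   // 65536
    static final int blockMask = blockSize - 1;    // 65535
    static final byte[][] blocks = new byte[2][blockSize];

    // Returns the bytes backing the slice of [start, start+length).
    static byte[] fillSlice(long start, int length) {
        final int index = (int) (start >> blockBits);
        final int offset = (int) (start & blockMask);
        if (index >= blocks.length) {
            // Outside the allocated blocks: only reachable by a zero-length
            // slice starting exactly at the end of the buffer, so return an
            // empty slice instead of indexing blocks[index].
            return EMPTY_BYTES;
        } else if (blockSize - offset >= length) {
            // Within a single block
            return blocks[index];
        } else {
            // Split across two adjacent blocks
            byte[] b = new byte[length];
            System.arraycopy(blocks[index], offset, b, 0, blockSize - offset);
            System.arraycopy(blocks[1 + index], 0, b, blockSize - offset,
                             length - (blockSize - offset));
            return b;
        }
    }

    public static void main(String[] args) {
        // The failing inputs from the debug output: start == 131072, length == 0
        // compute index == 2 while blocks.length == 2, so the unguarded code
        // throws ArrayIndexOutOfBoundsException; the guard returns empty.
        System.out.println(fillSlice(131072, 0).length); // prints 0
    }
}
```

This mirrors the guard in the patch: with two 64 KB blocks, start == 131072 is one past the last byte, which is a legal position only for a zero-length slice.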

  was (Author: cnstar9988):
{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];  //here is 
java.lang.ArrayIndexOutOfBoundsException
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When the java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blockSize=65536, blocks.length=2

my patch:
{noformat}
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat}
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.




[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777584#comment-13777584
 ] 

Littlestar commented on LUCENE-5218:


{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When the java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blockSize=65536, blocks.length=2

my patch:
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
  


> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Commented] (LUCENE-5212) java 7u40 causes sigsegv and corrupt term vectors

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777168#comment-13777168
 ] 

Littlestar commented on LUCENE-5212:


Java 7u40? Maybe it also corrupts doc values.

LUCENE-5212: my index was corrupted with another error, LUCENE-5218 (background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException).



> java 7u40 causes sigsegv and corrupt term vectors
> -
>
> Key: LUCENE-5212
> URL: https://issues.apache.org/jira/browse/LUCENE-5212
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: crashFaster2.0.patch, crashFaster.patch, 
> hs_err_pid32714.log, jenkins.txt
>
>





[jira] [Comment Edited] (LUCENE-5212) java 7u40 causes sigsegv and corrupt term vectors

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777168#comment-13777168
 ] 

Littlestar edited comment on LUCENE-5212 at 9/25/13 5:28 AM:
-

Java 7u40? Maybe it also corrupts doc values.

LUCENE-5218: background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException



  was (Author: cnstar9988):
Java 7u40? Maybe it also corrupts doc values.

LUCENE-5212: my index was corrupted with another error, LUCENE-5218 (background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException).


  
> java 7u40 causes sigsegv and corrupt term vectors
> -
>
> Key: LUCENE-5212
> URL: https://issues.apache.org/jira/browse/LUCENE-5212
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: crashFaster2.0.patch, crashFaster.patch, 
> hs_err_pid32714.log, jenkins.txt
>
>





[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776189#comment-13776189
 ] 

Littlestar edited comment on LUCENE-5218 at 9/24/13 10:51 AM:
--

I attached a test index and test code.
All files in test_fail_index except those for segment "_d8" were deleted (the full index is 5 GB across 54 segments; only segment "_d8" is needed for the test case).

java -cp ./bin;./lib/lucene-core-4.4.0.jar;./lib/lucene-codecs-4.4.0.jar org.apache.lucene.index.CheckIndex -segment _d8 ./test_fail_index

Thanks.

  was (Author: cnstar9988):
I attached a test index and test code.
All files in test_fail_index except those for segment "_d8" were deleted.

java -cp ./bin;./lib/lucene-core-4.4.0.jar;./lib/lucene-codecs-4.4.0.jar org.apache.lucene.index.CheckIndex -segment _d8 ./test_fail_index

Thanks.
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776189#comment-13776189
 ] 

Littlestar edited comment on LUCENE-5218 at 9/24/13 10:47 AM:
--

I attached a test index and test code.
The files in test_fail_index were deleted except those for segment "_d8".

java -cp ./bin;./lib/lucene-core-4.4.0.jar;./lib/lucene-codecs-4.4.0.jar 
org.apache.lucene.index.CheckIndex -segment _d8 ./test_fail_index

Thanks.

  was (Author: cnstar9988):
I attached test data and test code.
The files in test_fail_index were deleted except those for segment "_d8".

java -cp ./bin;./lib/lucene-core-4.4.0.jar;./lib/lucene-codecs-4.4.0.jar 
org.apache.lucene.index.CheckIndex -segment _d8 ./test_fail_index

Thanks.
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Updated] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5218:
---

Attachment: lucene44-LUCENE-5218.zip

I attached test data and test code.
The files in test_fail_index were deleted except those for segment "_d8".

java -cp ./bin;./lib/lucene-core-4.4.0.jar;./lib/lucene-codecs-4.4.0.jar 
org.apache.lucene.index.CheckIndex -segment _d8 ./test_fail_index

Thanks.

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776068#comment-13776068
 ] 

Littlestar edited comment on LUCENE-5218 at 9/24/13 7:51 AM:
-

org.apache.lucene.index.CheckIndex -segment _d8 /tmp/backup/indexdir
failed again.

How can I narrow it down? Thanks.
I reduced the index to a minimal set of files containing _d8, and CheckIndex
still fails with the above errors.
Can I upload the following index files? (32 MB)
{noformat} 
segments_54d
*.si
_d8.cfe
_d8.cfs
{noformat} 
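For reference, the minimal file set listed above could be packaged like this. A sketch with dummy stand-in files, since the real 32 MB index is not available here; in practice the same `tar` line would run inside the actual index directory:

```shell
# Create stand-ins for the listed files, then package and list the archive.
# Dummy files only -- substitute the real index directory in practice.
mkdir -p /tmp/minimal_index_demo
cd /tmp/minimal_index_demo
touch segments_54d _d8.si _d8.cfe _d8.cfs
tar czf minimal_index.tar.gz segments_54d *.si _d8.cfe _d8.cfs
tar tzf minimal_index.tar.gz
```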


  was (Author: cnstar9988):
org.apache.lucene.index.CheckIndex -segment _d8 /tmp/backup/indexdir
failed again.

How can I narrow it down? Thanks.
I reduced the index to a minimal set of files containing _d8, and CheckIndex
still fails with the above errors.
Can I upload the following index files?
{noformat} 
segments_54d
*.si
_d8.cfe
_d8.cfs
{noformat} 

  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776068#comment-13776068
 ] 

Littlestar edited comment on LUCENE-5218 at 9/24/13 7:49 AM:
-

org.apache.lucene.index.CheckIndex -segment _d8 /tmp/backup/indexdir
failed again.

How can I narrow it down? Thanks.
I reduced the index to a minimal set of files containing _d8, and CheckIndex
still fails with the above errors.
Can I upload the following index files?
{noformat} 
segments_54d
*.si
_d8.cfe
_d8.cfs
{noformat} 


  was (Author: cnstar9988):
org.apache.lucene.index.CheckIndex -segment _d8 /tmp/backup/indexdir
failed again.

How can I narrow it down? Thanks.
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776059#comment-13776059
 ] 

Littlestar edited comment on LUCENE-5218 at 9/24/13 7:25 AM:
-

I can't narrow it down from an empty index,
but I backed up the failed index (5 GB).

I ran core\src\java\org\apache\lucene\index\CheckIndex.java;
it looks like one segment failed.

44 of 54: name=_d8 docCount=19599
codec=hybaseStd42x
compound=true
numFiles=3
size (MB)=9.559
diagnostics = {timestamp=1379167874407, mergeFactor=22, 
os.version=2.6.32-358.el6.x86_64, os=Linux, lucene.version=4.4.0 1504776 - 
sarowe - 2013-07-19 02:49:47, source=merge, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.OK
test: fields..OK [29 fields]
test: field norms.OK [4 fields]
test: terms, freq, prox...OK [289268 terms; 3096641 terms/docs pairs; 
689694 tokens]
test: stored fields...OK [408046 total field count; avg 20.82 fields 
per doc]
test: term vectorsOK [0 total vector count; avg 0 term/freq vector 
fields per doc]
test: docvalues...ERROR [2]
java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.index.CheckIndex.checkBinaryDocValues(CheckIndex.java:1316)
at 
org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1420)
at 
org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1291)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:615)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)
FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: DocValues test failed
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:628)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)

  45 of 54: name=_3xn docCount=215407
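The WARNING line in the output above refers to CheckIndex's repair path (fixIndex()). The CLI switch that invokes it is assumed here to be `-fix` for this era of Lucene (confirm with the tool's usage message); it is lossy, dropping every document in the unreferenced segment, so it should only ever run on a copy. A purely illustrative sketch that prints the command rather than running it, since the Lucene jars and the 5 GB index are not available here:

```shell
# Illustrative only: print the repair invocation instead of running it.
# The repair mode removes the bad segment's documents -- back up first.
CP="./bin:./lib/lucene-core-4.4.0.jar:./lib/lucene-codecs-4.4.0.jar"
CMD="java -cp $CP org.apache.lucene.index.CheckIndex ./test_fail_index -fix"
echo "$CMD"
```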

  was (Author: cnstar9988):
I can't narrow it down, but I backed up the index (5 GB).

I ran core\src\java\org\apache\lucene\index\CheckIndex.java;
it looks like one segment failed.

44 of 54: name=_d8 docCount=19599
codec=hybaseStd42x
compound=true
numFiles=3
size (MB)=9.559
diagnostics = {timestamp=1379167874407, mergeFactor=22, 
os.version=2.6.32-358.el6.x86_64, os=Linux, lucene.version=4.4.0 1504776 - 
sarowe - 2013-07-19 02:49:47, source=merge, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.OK
test: fields..OK [29 fields]
test: field norms.OK [4 fields]
test: terms, freq, prox...OK [289268 terms; 3096641 terms/docs pairs; 
689694 tokens]
test: stored fields...OK [408046 total field count; avg 20.82 fields 
per doc]
test: term vectorsOK [0 total vector count; avg 0 term/freq vector 
fields per doc]
test: docvalues...ERROR [2]
java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.index.CheckIndex.checkBinaryDocValues(CheckIndex.java:1316)
at 
org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1420)
at 
org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1291)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:615)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)
FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: DocValues test failed
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:628)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)

  45 of 54: name=_3xn docCount=215407
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4

[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776059#comment-13776059
 ] 

Littlestar edited comment on LUCENE-5218 at 9/24/13 7:22 AM:
-

I can't narrow it down, but I backed up the index (5 GB).

I ran core\src\java\org\apache\lucene\index\CheckIndex.java;
it looks like one segment failed.

44 of 54: name=_d8 docCount=19599
codec=hybaseStd42x
compound=true
numFiles=3
size (MB)=9.559
diagnostics = {timestamp=1379167874407, mergeFactor=22, 
os.version=2.6.32-358.el6.x86_64, os=Linux, lucene.version=4.4.0 1504776 - 
sarowe - 2013-07-19 02:49:47, source=merge, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.OK
test: fields..OK [29 fields]
test: field norms.OK [4 fields]
test: terms, freq, prox...OK [289268 terms; 3096641 terms/docs pairs; 
689694 tokens]
test: stored fields...OK [408046 total field count; avg 20.82 fields 
per doc]
test: term vectorsOK [0 total vector count; avg 0 term/freq vector 
fields per doc]
test: docvalues...ERROR [2]
java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.index.CheckIndex.checkBinaryDocValues(CheckIndex.java:1316)
at 
org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1420)
at 
org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1291)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:615)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)
FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: DocValues test failed
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:628)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)

  45 of 54: name=_3xn docCount=215407

  was (Author: cnstar9988):
I can't narrow it down, but I backed up the index (5 GB).

I ran core\src\java\org\apache\lucene\index\CheckIndex.java

44 of 54: name=_d8 docCount=19599
codec=hybaseStd42x
compound=true
numFiles=3
size (MB)=9.559
diagnostics = {timestamp=1379167874407, mergeFactor=22, 
os.version=2.6.32-358.el6.x86_64, os=Linux, lucene.version=4.4.0 1504776 - 
sarowe - 2013-07-19 02:49:47, source=merge, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.OK
test: fields..OK [29 fields]
test: field norms.OK [4 fields]
test: terms, freq, prox...OK [289268 terms; 3096641 terms/docs pairs; 
689694 tokens]
test: stored fields...OK [408046 total field count; avg 20.82 fields 
per doc]
test: term vectorsOK [0 total vector count; avg 0 term/freq vector 
fields per doc]
test: docvalues...ERROR [2]
java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.index.CheckIndex.checkBinaryDocValues(CheckIndex.java:1316)
at 
org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1420)
at 
org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1291)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:615)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)
FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: DocValues test failed
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:628)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)

  45 of 54: name=_3xn docCount=215407
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c

[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776068#comment-13776068
 ] 

Littlestar commented on LUCENE-5218:


org.apache.lucene.index.CheckIndex -segment _d8 /tmp/backup/indexdir
failed again.

How can I narrow it down? Thanks.

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776059#comment-13776059
 ] 

Littlestar edited comment on LUCENE-5218 at 9/24/13 7:16 AM:
-

I can't narrow it down, but I backed up the index (5 GB).

I ran core\src\java\org\apache\lucene\index\CheckIndex.java

44 of 54: name=_d8 docCount=19599
codec=hybaseStd42x
compound=true
numFiles=3
size (MB)=9.559
diagnostics = {timestamp=1379167874407, mergeFactor=22, 
os.version=2.6.32-358.el6.x86_64, os=Linux, lucene.version=4.4.0 1504776 - 
sarowe - 2013-07-19 02:49:47, source=merge, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.OK
test: fields..OK [29 fields]
test: field norms.OK [4 fields]
test: terms, freq, prox...OK [289268 terms; 3096641 terms/docs pairs; 
689694 tokens]
test: stored fields...OK [408046 total field count; avg 20.82 fields 
per doc]
test: term vectorsOK [0 total vector count; avg 0 term/freq vector 
fields per doc]
test: docvalues...ERROR [2]
java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.index.CheckIndex.checkBinaryDocValues(CheckIndex.java:1316)
at 
org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1420)
at 
org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1291)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:615)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)
FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: DocValues test failed
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:628)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)

  45 of 54: name=_3xn docCount=215407

  was (Author: cnstar9988):
I can't narrow it down, but I backed up the index (5 GB).

I ran core\src\java\org\apache\lucene\index\CheckIndex.java

java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.index.CheckIndex.checkBinaryDocValues(CheckIndex.java:1316)
at 
org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1420)
at 
org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1291)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:615)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)


  

[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-24 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776059#comment-13776059
 ] 

Littlestar commented on LUCENE-5218:


I can't narrow it down, but I backed up the index (5 GB).

I ran core\src\java\org\apache\lucene\index\CheckIndex.java

java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.index.CheckIndex.checkBinaryDocValues(CheckIndex.java:1316)
at 
org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1420)
at 
org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1291)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:615)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)



> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5230) CJKAnalyzer can't split ";"

2013-09-20 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5230:
---

Description: 
@Test
public void test_AlphaNumAnalyzer() throws IOException {
    Analyzer analyzer = new CJKAnalyzer(Version.LUCENE_44);
    TokenStream token = analyzer.tokenStream("test", new StringReader("0009bf2d97e9f86a7188002a64a84b351379323870284;0009bf2e97e9f8707188002a64a84b351379323870273;000ae1f0b4390779eed1002a64a8a7950;0001e87997e9f0017188000a64a84b351378869697875;fff205ce319b68ff1a3c002964a820841377769850018;000ae1f0b439077beed1002a64a8a7950;000ae1f1b439077deed1002a64a8a7950;0009bf2d97e9f86c7188002a64a84b351379323870281;0015adfd0c69d870debb000a64a8477c1378809423441"));
    token.reset(); // here
    while (token.incrementToken()) {
        final CharTermAttribute termAtt = token.addAttribute(CharTermAttribute.class);
        System.out.println(termAtt.toString());
    }
    analyzer.close();
}

  was:
@Test
public void test_AlphaNumAnalyzer() throws IOException {
    Analyzer analyzer = new CJKAnalyzer(Version.LUCENE_44);
    TokenStream token = analyzer.tokenStream("test", new StringReader("0009bf2d97e9f86a7188002a64a84b351379323870284;0009bf2e97e9f8707188002a64a84b351379323870273;000ae1f0b4390779eed1002a64a8a7950;0001e87997e9f0017188000a64a84b351378869697875;fff205ce319b68ff1a3c002964a820841377769850018;000ae1f0b439077beed1002a64a8a7950;000ae1f1b439077deed1002a64a8a7950;0009bf2d97e9f86c7188002a64a84b351379323870281;0015adfd0c69d870debb000a64a8477c1378809423441"));
    while (token.incrementToken()) {
        final CharTermAttribute termAtt = token.addAttribute(CharTermAttribute.class);
        System.out.println(termAtt.toString());
    }
    analyzer.close();
}


> CJKAnalyzer can't split ";"
> ---
>
> Key: LUCENE-5230
> URL: https://issues.apache.org/jira/browse/LUCENE-5230
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.4
>Reporter: Littlestar
>Priority: Minor
>
> @Test
> public void test_AlphaNumAnalyzer() throws IOException {
> Analyzer analyzer = new CJKAnalyzer(Version.LUCENE_44);
> TokenStream token = analyzer.tokenStream("test", new 
> StringReader("0009bf2d97e9f86a7188002a64a84b351379323870284;0009bf2e97e9f8707188002a64a84b351379323870273;000ae1f0b4390779eed1002a64a8a7950;0001e87997e9f0017188000a64a84b351378869697875;fff205ce319b68ff1a3c002964a820841377769850018;000ae1f0b439077beed1002a64a8a7950;000ae1f1b439077deed1002a64a8a7950;0009bf2d97e9f86c7188002a64a84b351379323870281;0015adfd0c69d870debb000a64a8477c1378809423441"));
>token.reset(); //here
>while (token.incrementToken()) {
> final CharTermAttribute termAtt = 
> token.addAttribute(CharTermAttribute.class);
> System.out.println(termAtt.toString());
> }
> analyzer.close();
> }




[jira] [Created] (LUCENE-5230) CJKAnalyzer java.lang.NullPointerException

2013-09-20 Thread Littlestar (JIRA)
Littlestar created LUCENE-5230:
--

 Summary: CJKAnalyzer java.lang.NullPointerException
 Key: LUCENE-5230
 URL: https://issues.apache.org/jira/browse/LUCENE-5230
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 4.4
Reporter: Littlestar
Priority: Minor


@Test
public void test_AlphaNumAnalyzer() throws IOException {
    Analyzer analyzer = new CJKAnalyzer(Version.LUCENE_44);
    TokenStream token = analyzer.tokenStream("test", new StringReader("中国"));
    //TokenStream token = analyzer.tokenStream("test", new StringReader("0009bf2d97e9f86a7188002a64a84b351379323870284;0009bf2e97e9f8707188002a64a84b351379323870273;000ae1f0b4390779eed1002a64a8a7950;0001e87997e9f0017188000a64a84b351378869697875;fff205ce319b68ff1a3c002964a820841377769850018;000ae1f0b439077beed1002a64a8a7950;000ae1f1b439077deed1002a64a8a7950;0009bf2d97e9f86c7188002a64a84b351379323870281;0015adfd0c69d870debb000a64a8477c1378809423441"));
    while (token.incrementToken()) {
        final CharTermAttribute termAtt = token.addAttribute(CharTermAttribute.class);
        System.out.println(termAtt.toString());
    }
    analyzer.close();
}



java.lang.NullPointerException
at 
org.apache.lucene.analysis.standard.StandardTokenizerImpl.zzRefill(StandardTokenizerImpl.java:923)
at 
org.apache.lucene.analysis.standard.StandardTokenizerImpl.getNextToken(StandardTokenizerImpl.java:1133)
at 
org.apache.lucene.analysis.standard.StandardTokenizer.incrementToken(StandardTokenizer.java:171)
at 
org.apache.lucene.analysis.cjk.CJKWidthFilter.incrementToken(CJKWidthFilter.java:63)
at 
org.apache.lucene.analysis.core.LowerCaseFilter.incrementToken(LowerCaseFilter.java:54)
at 
org.apache.lucene.analysis.cjk.CJKBigramFilter.doNext(CJKBigramFilter.java:240)
at 
org.apache.lucene.analysis.cjk.CJKBigramFilter.incrementToken(CJKBigramFilter.java:169)
at 
org.apache.lucene.analysis.util.FilteringTokenFilter.incrementToken(FilteringTokenFilter.java:81)






[jira] [Updated] (LUCENE-5230) CJKAnalyzer java.lang.NullPointerException

2013-09-20 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5230:
---

Summary: CJKAnalyzer java.lang.NullPointerException  (was: CJKAnalyzer 
can't split ";")

Fixed, thanks.

I wanted to split a CJK string on ";" and into CJK bigrams, but it failed.
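Splitting the ';'-delimited part of that input does not need an analyzer at all; a minimal stdlib-only sketch (hypothetical SemicolonSplit helper, not the Lucene API; CJK bigramming would still need an analyzer):

```java
// Minimal stdlib sketch: split a ';'-delimited ID string into tokens.
public class SemicolonSplit {
    static String[] splitIds(String input) {
        return input.split(";");
    }

    public static void main(String[] args) {
        // short hypothetical sample; the IDs in the report are much longer
        for (String id : splitIds("0009bf2d;000ae1f0;0015adfd")) {
            System.out.println(id);
        }
    }
}
```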

> CJKAnalyzer java.lang.NullPointerException
> --
>
> Key: LUCENE-5230
> URL: https://issues.apache.org/jira/browse/LUCENE-5230
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.4
>Reporter: Littlestar
>Priority: Minor
>
> @Test
> public void test_AlphaNumAnalyzer() throws IOException {
> Analyzer analyzer = new CJKAnalyzer(Version.LUCENE_44);
> TokenStream token = analyzer.tokenStream("test", new 
> StringReader("0009bf2d97e9f86a7188002a64a84b351379323870284;0009bf2e97e9f8707188002a64a84b351379323870273;000ae1f0b4390779eed1002a64a8a7950;0001e87997e9f0017188000a64a84b351378869697875;fff205ce319b68ff1a3c002964a820841377769850018;000ae1f0b439077beed1002a64a8a7950;000ae1f1b439077deed1002a64a8a7950;0009bf2d97e9f86c7188002a64a84b351379323870281;0015adfd0c69d870debb000a64a8477c1378809423441"));
>token.reset(); //here
>while (token.incrementToken()) {
> final CharTermAttribute termAtt = 
> token.addAttribute(CharTermAttribute.class);
> System.out.println(termAtt.toString());
> }
> analyzer.close();
> }




[jira] [Updated] (LUCENE-5230) CJKAnalyzer can't split ";"

2013-09-20 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5230:
---

Description: 
@Test
public void test_AlphaNumAnalyzer() throws IOException {
    Analyzer analyzer = new CJKAnalyzer(Version.LUCENE_44);
    TokenStream token = analyzer.tokenStream("test", new StringReader("0009bf2d97e9f86a7188002a64a84b351379323870284;0009bf2e97e9f8707188002a64a84b351379323870273;000ae1f0b4390779eed1002a64a8a7950;0001e87997e9f0017188000a64a84b351378869697875;fff205ce319b68ff1a3c002964a820841377769850018;000ae1f0b439077beed1002a64a8a7950;000ae1f1b439077deed1002a64a8a7950;0009bf2d97e9f86c7188002a64a84b351379323870281;0015adfd0c69d870debb000a64a8477c1378809423441"));
    while (token.incrementToken()) {
        final CharTermAttribute termAtt = token.addAttribute(CharTermAttribute.class);
        System.out.println(termAtt.toString());
    }
    analyzer.close();
}

  was:
@Test
public void test_AlphaNumAnalyzer() throws IOException {
    Analyzer analyzer = new CJKAnalyzer(Version.LUCENE_44);
    TokenStream token = analyzer.tokenStream("test", new StringReader("中国"));
    //TokenStream token = analyzer.tokenStream("test", new StringReader("0009bf2d97e9f86a7188002a64a84b351379323870284;0009bf2e97e9f8707188002a64a84b351379323870273;000ae1f0b4390779eed1002a64a8a7950;0001e87997e9f0017188000a64a84b351378869697875;fff205ce319b68ff1a3c002964a820841377769850018;000ae1f0b439077beed1002a64a8a7950;000ae1f1b439077deed1002a64a8a7950;0009bf2d97e9f86c7188002a64a84b351379323870281;0015adfd0c69d870debb000a64a8477c1378809423441"));
    while (token.incrementToken()) {
        final CharTermAttribute termAtt = token.addAttribute(CharTermAttribute.class);
        System.out.println(termAtt.toString());
    }
    analyzer.close();
}



java.lang.NullPointerException
at 
org.apache.lucene.analysis.standard.StandardTokenizerImpl.zzRefill(StandardTokenizerImpl.java:923)
at 
org.apache.lucene.analysis.standard.StandardTokenizerImpl.getNextToken(StandardTokenizerImpl.java:1133)
at 
org.apache.lucene.analysis.standard.StandardTokenizer.incrementToken(StandardTokenizer.java:171)
at 
org.apache.lucene.analysis.cjk.CJKWidthFilter.incrementToken(CJKWidthFilter.java:63)
at 
org.apache.lucene.analysis.core.LowerCaseFilter.incrementToken(LowerCaseFilter.java:54)
at 
org.apache.lucene.analysis.cjk.CJKBigramFilter.doNext(CJKBigramFilter.java:240)
at 
org.apache.lucene.analysis.cjk.CJKBigramFilter.incrementToken(CJKBigramFilter.java:169)
at 
org.apache.lucene.analysis.util.FilteringTokenFilter.incrementToken(FilteringTokenFilter.java:81)



Summary: CJKAnalyzer can't split ";"  (was: CJKAnalyzer 
java.lang.NullPointerException)





[jira] [Commented] (LUCENE-5230) CJKAnalyzer can't split ";"

2013-09-20 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773147#comment-13773147
 ] 

Littlestar commented on LUCENE-5230:


Sorry, I missed reset().
I want to split on ";".
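The NullPointerException came from consuming the stream before reset(): Lucene's TokenStream contract is reset(), then the incrementToken() loop, then end() and close(). A stdlib-only sketch of that reset-before-use contract (hypothetical MiniTokenStream class for illustration, not the real Lucene API):

```java
// Minimal sketch of a reset-before-use iterator contract, mirroring why
// a Lucene TokenStream fails when incrementToken() is called before reset().
public class MiniTokenStream {
    private final String[] tokens;
    private int pos = -1;          // -1 means "not reset yet"
    private String current;

    public MiniTokenStream(String input) {
        this.tokens = input.split(";");
    }

    public void reset() {
        pos = 0;
    }

    public boolean incrementToken() {
        if (pos < 0) throw new IllegalStateException("call reset() first");
        if (pos >= tokens.length) return false;
        current = tokens[pos++];
        return true;
    }

    public String term() {
        return current;
    }

    public static void main(String[] args) {
        MiniTokenStream ts = new MiniTokenStream("a;b");
        ts.reset();                // omitting this reproduces the bug's shape
        while (ts.incrementToken()) {
            System.out.println(ts.term());
        }
    }
}
```

With real Lucene, the same shape applies: call reset() before the loop, then end() and close() afterwards.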





[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-17 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769327#comment-13769327
 ] 

Littlestar commented on LUCENE-5218:


JVM args:
-Xss256k -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:GCPauseIntervalMillis=1000 -XX:NewRatio=2 -XX:SurvivorRatio=1
Maybe UseG1GC has a bug?





[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-16 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768344#comment-13768344
 ] 

Littlestar commented on LUCENE-5218:


My app continuously inserts records, maybe 10-1 records per second.
The Lucene index accumulates very small segments, so I call forceMerge(80) before each call.





[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-16 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768344#comment-13768344
 ] 

Littlestar edited comment on LUCENE-5218 at 9/16/13 2:22 PM:
-

My app continuously inserts records, maybe 10-1 records per second.
The Lucene index accumulates a lot of small segments, so I call forceMerge(80) before each call.


  was (Author: cnstar9988):
My app continuously inserts records, maybe 10-1 records per second.
The Lucene index accumulates very small segments, so I call forceMerge(80) before each call.
  




[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-16 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768322#comment-13768322
 ] 

Littlestar edited comment on LUCENE-5218 at 9/16/13 2:09 PM:
-

java version "1.7.0_25"
I also built JDK 7u40 from openjdk-7u40-fcs-src-b43-26_aug_2013.zip.
Both JDKs have the same problem.


  was (Author: cnstar9988):
java version "1.7.0_25"
I also built OpenJDK 7u40 from openjdk-7u40-fcs-src-b43-26_aug_2013.zip.
Both JDKs have the same problem.

  




[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-16 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768322#comment-13768322
 ] 

Littlestar commented on LUCENE-5218:


java version "1.7.0_25"
I also built OpenJDK 7u40 from openjdk-7u40-fcs-src-b43-26_aug_2013.zip.
Both JDKs have the same problem.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-15 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768098#comment-13768098
 ] 

Littlestar edited comment on LUCENE-5218 at 9/16/13 6:24 AM:
-

PagedBytes.java#fillSlice
Could the start value passed in be wrong?

{noformat} 
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat} 
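To see where that split path can overflow, here is a minimal, self-contained replica of the paged-bytes read (hypothetical block size and data, not Lucene's actual class): a slice that crosses a block boundary copies the tail of one block and the head of the next, so a `start` beyond the written data makes `index + 1` run past the end of `blocks` — the same ArrayIndexOutOfBoundsException reported in the stack trace.

```java
// Hypothetical minimal replica of the PagedBytes split-read logic
// (4-byte blocks for illustration; Lucene uses much larger blocks).
public class PagedSliceDemo {
    static final int BLOCK_BITS = 2;
    static final int BLOCK_SIZE = 1 << BLOCK_BITS;   // 4
    static final int BLOCK_MASK = BLOCK_SIZE - 1;

    static byte[][] blocks = {
        {0, 1, 2, 3},
        {4, 5, 6, 7},
    };

    // Copy `length` bytes starting at absolute position `start` into a fresh array.
    static byte[] fillSlice(long start, int length) {
        final int index = (int) (start >> BLOCK_BITS);   // which block
        final int offset = (int) (start & BLOCK_MASK);   // position inside it
        byte[] out = new byte[length];
        if (BLOCK_SIZE - offset >= length) {
            // Slice fits entirely inside one block.
            System.arraycopy(blocks[index], offset, out, 0, length);
        } else {
            // Slice spans a block boundary: tail of blocks[index],
            // then head of blocks[index + 1]. If `start` points past the
            // data actually written, index + 1 can exceed blocks.length,
            // producing an ArrayIndexOutOfBoundsException here.
            System.arraycopy(blocks[index], offset, out, 0, BLOCK_SIZE - offset);
            System.arraycopy(blocks[index + 1], 0, out, BLOCK_SIZE - offset,
                             length - (BLOCK_SIZE - offset));
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] slice = fillSlice(3, 3);   // crosses from block 0 into block 1
        System.out.println(java.util.Arrays.toString(slice));
    }
}
```

This suggests the exception points at a corrupt or inconsistent `start`/`length` handed to fillSlice rather than at the copy logic itself.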

  was (Author: cnstar9988):
PagedBytes.java#fillSlice
Could the start value passed in be wrong?


public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-15 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768098#comment-13768098
 ] 

Littlestar commented on LUCENE-5218:


PagedBytes.java#fillSlice
Could the start value passed in be wrong?


public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===




[jira] [Updated] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-15 Thread Littlestar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Littlestar updated LUCENE-5218:
---

Description: 
forceMerge(80)
==
Caused by: java.io.IOException: background merge hit exception: 
_3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
_16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
_dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
[maxNumSegments=80]
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
at 
com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
... 4 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
at 
org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
at 
org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
at 
org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
at 
org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

===

  was:
forceMerge(80)
==
Caused by: java.io.IOException: background merge hit exception: 
_3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
_16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
_dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
[maxNumSegments=80]
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
at 
com.trs.hybase.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
... 4 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
at 
org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
at 
org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
at 
org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
at 
org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

===


> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into

[jira] [Created] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-15 Thread Littlestar (JIRA)
Littlestar created LUCENE-5218:
--

 Summary: background merge hit exception && Caused by: 
java.lang.ArrayIndexOutOfBoundsException
 Key: LUCENE-5218
 URL: https://issues.apache.org/jira/browse/LUCENE-5218
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.4
 Environment: Linux MMapDirectory.
Reporter: Littlestar


forceMerge(80)
==
Caused by: java.io.IOException: background merge hit exception: 
_3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
_16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
_dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
[maxNumSegments=80]
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
at 
com.trs.hybase.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
... 4 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
at 
org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
at 
org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
at 
org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
at 
org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
at 
org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
at 
org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

===



