[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b89) - Build # 5678 - Failure!

2013-05-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5678/
Java: 64bit/jdk1.8.0-ea-b89 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
access denied ("java.sql.SQLPermission" "deregisterDriver")

Stack Trace:
java.security.AccessControlException: access denied ("java.sql.SQLPermission" "deregisterDriver")
    at __randomizedtesting.SeedInfo.seed([756CA78971FEDC3E]:0)
    at java.security.AccessControlContext.checkPermission(AccessControlContext.java:364)
    at java.security.AccessController.checkPermission(AccessController.java:562)
    at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
    at java.sql.DriverManager.deregisterDriver(DriverManager.java:399)
    at org.apache.derby.jdbc.AutoloadedDriver.unregisterDriverModule(Unknown Source)
    at org.apache.derby.jdbc.Driver20.stop(Unknown Source)
    at org.apache.derby.impl.services.monitor.TopService.stop(Unknown Source)
    at org.apache.derby.impl.services.monitor.TopService.shutdown(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.shutdown(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.shutdown(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:661)
    at java.sql.DriverManager.getConnection(DriverManager.java:270)
    at org.apache.solr.handler.dataimport.AbstractDIHJdbcTestCase.afterClassDihJdbcTest(AbstractDIHJdbcTestCase.java:77)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:491)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
    at java.lang.Thread.run(Thread.java:724)


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta:
   1) Thread[id=49, name=Timer-0, state=WAITING, group=TGRP-TestSqlEntityProcessorDelta]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:502)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta:
   1) Thread[id=49, name=Timer-0, state=WAITING, group=TGRP-TestSqlEntityProcessorDelta]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:502)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.uti
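
A leaked Timer-0 thread like the one reported above is what java.util.Timer leaves behind when nobody calls cancel() on it: the Timer spawns a non-daemon worker thread that outlives the test. The following is a hypothetical sketch of that leak pattern and the teardown fix, not the actual DIH test code:

```java
import java.util.Timer;
import java.util.TimerTask;

// Hypothetical illustration of the Timer-0 leak pattern, not the actual
// DIH test code: java.util.Timer starts a background worker thread that
// keeps running until cancel() is called.
public class TimerLeakSketch {

    public static boolean timerThreadStopped() {
        // Named so the thread is easy to find in a thread dump.
        Timer timer = new Timer("LeakedTimer");
        timer.schedule(new TimerTask() {
            @Override public void run() { /* periodic work */ }
        }, 1_000, 1_000);

        // The teardown step whose absence causes a suite-scope thread leak:
        timer.cancel();

        // Give the worker thread a moment to exit its mainLoop.
        try { Thread.sleep(500); } catch (InterruptedException ignored) { }

        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if ("LeakedTimer".equals(t.getName())) {
                return false; // still alive -> would be reported as a leak
            }
        }
        return true;
    }
}
```

Without the cancel() call, the worker thread sits in TimerThread.mainLoop exactly as shown in the leak report.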

Re: Please help me error search.IndexSearcher.<init>(Ljava/lang/String;)V

2013-05-16 Thread Furkan KAMACI
Please ask that question at user mailing list.

2013/5/16 fifi 

> Please, how can I solve this error?
>
> Exception in thread "main" java.lang.NoSuchMethodError:
> org.apache.lucene.search.IndexSearcher.<init>(Ljava/lang/String;)V
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Please-help-me-error-search-IndexSearcher-init-Ljava-lang-String-V-tp4063698.html
> Sent from the Lucene - Java Developer mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Created] (LUCENE-5002) Deadlock in DocumentsWriterFlushControl

2013-05-16 Thread Sergiusz Urbaniak (JIRA)
Sergiusz Urbaniak created LUCENE-5002:
-

 Summary: Deadlock in DocumentsWriterFlushControl
 Key: LUCENE-5002
 URL: https://issues.apache.org/jira/browse/LUCENE-5002
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.3
 Environment: OpenJDK 64-Bit Server VM (23.7-b01 mixed mode)
Linux Ubuntu Server 12.04 LTS 64-Bit
Reporter: Sergiusz Urbaniak


Hi all,

We have an obvious deadlock between a "MaybeRefreshIndexJob" thread
calling ReferenceManager.maybeRefresh(ReferenceManager.java:204) and a
"RebuildIndexJob" thread calling
IndexWriter.deleteAll(IndexWriter.java:2065).

Lucene wants to flush in the "MaybeRefreshIndexJob" thread, which tries to
acquire the intrinsic lock on the IndexWriter instance at
{{DocumentsWriterPerThread.java:563}} before notifyAll()ing the flush.

Simultaneously, the "RebuildIndexJob" thread, which has already acquired the
intrinsic lock on the IndexWriter instance in IndexWriter#deleteAll, wait()s
forever at {{DocumentsWriterFlushControl.java:245}} for the flush, causing a
deadlock.

{code}
"MaybeRefreshIndexJob Thread - 2" daemon prio=10 tid=0x7f8fe4006000 nid=0x1ac2 waiting for monitor entry [0x7f8fa7bf7000]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at org.apache.lucene.index.IndexWriter.useCompoundFile(IndexWriter.java:2223)
	- waiting to lock <0xf1c00438> (a org.apache.lucene.index.IndexWriter)
	at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:563)
	at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:533)
	at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
	at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:559)
	at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:365)
	- locked <0xf1c007d0> (a java.lang.Object)
	at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:270)
	at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:245)
	at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
	at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:170)
	at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:118)
	at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58)
	at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:155)
	at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:204)
	at jobs.MaybeRefreshIndexJob.timeout(MaybeRefreshIndexJob.java:47)

"RebuildIndexJob Thread - 1" prio=10 tid=0x7f903000a000 nid=0x1a38 in Object.wait() [0x7f9037dd6000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0xf1c0c240> (a org.apache.lucene.index.DocumentsWriterFlushControl)
	at java.lang.Object.wait(Object.java:503)
	at org.apache.lucene.index.DocumentsWriterFlushControl.waitForFlush(DocumentsWriterFlushControl.java:245)
	- locked <0xf1c0c240> (a org.apache.lucene.index.DocumentsWriterFlushControl)
	at org.apache.lucene.index.DocumentsWriter.abort(DocumentsWriter.java:235)
	- locked <0xf1c05370> (a org.apache.lucene.index.DocumentsWriter)
	at org.apache.lucene.index.IndexWriter.deleteAll(IndexWriter.java:2065)
	- locked <0xf1c00438> (a org.apache.lucene.index.IndexWriter)
	at jobs.RebuildIndexJob.buildIndex(RebuildIndexJob.java:102)
{code}
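
Stripped of the Lucene specifics, the two dumps show a classic lock-order inversion: each thread holds one monitor and waits for the other's. The sketch below reproduces just that shape; the lock names only stand in for the IndexWriter monitor and the flush control, this is not Lucene code. tryLock with a timeout is used so the demo terminates where the real code blocks forever.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the lock inversion shown in the thread dumps;
// writerLock ~ the IndexWriter monitor, flushLock ~ DocumentsWriterFlushControl.
public class DeadlockSketch {
    static final ReentrantLock writerLock = new ReentrantLock();
    static final ReentrantLock flushLock  = new ReentrantLock();

    public static boolean inversionDetected() {
        CountDownLatch bothHeld = new CountDownLatch(2);
        boolean[] stuck = new boolean[2];

        // "MaybeRefreshIndexJob": holds the writer lock, then needs the flush lock.
        Thread refresh = new Thread(() -> {
            writerLock.lock();
            try {
                bothHeld.countDown();
                await(bothHeld);
                stuck[0] = !tryBriefly(flushLock);
            } finally {
                writerLock.unlock();
            }
        });

        // "RebuildIndexJob": holds the flush lock, then needs the writer lock.
        Thread rebuild = new Thread(() -> {
            flushLock.lock();
            try {
                bothHeld.countDown();
                await(bothHeld);
                stuck[1] = !tryBriefly(writerLock);
            } finally {
                flushLock.unlock();
            }
        });

        refresh.start();
        rebuild.start();
        try {
            refresh.join();
            rebuild.join();
        } catch (InterruptedException e) {
            return false;
        }
        // Both threads failing to acquire their second lock is the deadlock shape.
        return stuck[0] && stuck[1];
    }

    private static boolean tryBriefly(ReentrantLock lock) {
        try {
            if (lock.tryLock(200, TimeUnit.MILLISECONDS)) {
                lock.unlock();
                return true;
            }
            return false;
        } catch (InterruptedException e) {
            return false;
        }
    }

    private static void await(CountDownLatch latch) {
        try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

In the real trace neither side has a timeout: deleteAll() wait()s for a flush that can never finish, because the flusher is blocked on the very monitor deleteAll() holds.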

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4827) fuzzy search problem

2013-05-16 Thread vishal parekh (JIRA)
vishal parekh created SOLR-4827:
---

 Summary: fuzzy search problem
 Key: SOLR-4827
 URL: https://issues.apache.org/jira/browse/SOLR-4827
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.3, 4.2
 Environment: OS - ubuntu
Server - Jboss 7 
Reporter: vishal parekh


I periodically import and index records into the Solr server.

(1) Suppose I first import 40 records and commit. A fuzzy search on them works fine.

(2) I import another 10 records and commit. Fuzzy search still works fine.

(3) I import another 5 records and commit. Now, when I run a fuzzy search against the older records (not the newly added ones), it returns fewer records than before.

Say that after the first import a fuzzy search returns 3000 records; on the same data, the same search now returns only 1000.

The steps above are just an example; the issue does not appear only after the third import.

I am not sure whether the index size causes the problem, or something else does.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5002) Deadlock in DocumentsWriterFlushControl

2013-05-16 Thread Sergiusz Urbaniak (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659343#comment-13659343
 ] 

Sergiusz Urbaniak commented on LUCENE-5002:
---

Implementation note: we (obviously) use the same IndexWriter instance across 
all threads.

> Deadlock in DocumentsWriterFlushControl
> ---
>
> Key: LUCENE-5002
> URL: https://issues.apache.org/jira/browse/LUCENE-5002
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: OpenJDK 64-Bit Server VM (23.7-b01 mixed mode)
> Linux Ubuntu Server 12.04 LTS 64-Bit
>Reporter: Sergiusz Urbaniak
>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b89) - Build # 5678 - Failure!

2013-05-16 Thread Uwe Schindler
"deregisterDriver" is a new permission in Java 8 b89 (updated yesterday). We 
have to add it to our policy file.

Compare:
http://download.java.net/jdk8/docs/api/java/sql/SQLPermission.html
With:
http://docs.oracle.com/javase/7/docs/api/java/sql/SQLPermission.html
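
For reference, the kind of grant this requires looks like the following; the exact name and location of the policy file in the Lucene/Solr build are not shown here, so treat this as a sketch:

```
// added to the security policy used when running the tests
grant {
  permission java.sql.SQLPermission "deregisterDriver";
};
```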

I'll take care of it.
Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
> Sent: Thursday, May 16, 2013 9:27 AM
> To: dev@lucene.apache.org
> Subject: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b89) - Build #
> 5678 - Failure!
> 
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5678/
> Java: 64bit/jdk1.8.0-ea-b89 -XX:-UseCompressedOops -
> XX:+UseConcMarkSweepGC
> 

RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b89) - Build # 5678 - Failure!

2013-05-16 Thread Uwe Schindler
This is the related commit causing this:
http://hg.openjdk.java.net/jdk8/build/jdk/rev/ac3e189c9099

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Uwe Schindler [mailto:u...@thetaphi.de]
> Sent: Thursday, May 16, 2013 9:59 AM
> To: 'dev@lucene.apache.org'
> Subject: RE: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b89) -
> Build # 5678 - Failure!
> 
> "deregisterDriver" is a new permission in Java 8 b89 (updated yesterday). We
> have to add it to our policy file.
> 
> Compare:
> http://download.java.net/jdk8/docs/api/java/sql/SQLPermission.html
> With:
> http://docs.oracle.com/javase/7/docs/api/java/sql/SQLPermission.html
> 
> I'll take care.
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de

Re: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b89) - Build # 5616 - Failure!

2013-05-16 Thread Shalin Shekhar Mangar
The above seeds result in the following failure consistently on Windows 8 6.2 amd64/Oracle Corporation 1.7.0_17 (64-bit)/cpus=8,threads=1:

ant test  -Dtestcase=TestSqlEntityProcessor -Dtests.seed=18C23A83BF7C726C
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ar_SA
-Dtests.timezone=Etc/UTC -Dtests.file.encoding=ISO-8859-1

[junit4:junit4] Started J0 PID(5832@shalin-desktop).
[junit4:junit4] Suite:
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta
[junit4:junit4]   2> log4j:WARN No appenders could be found for logger
(org.apache.solr.SolrTestCaseJ4).
[junit4:junit4]   2> log4j:WARN Please initialize the log4j system properly.
[junit4:junit4]   2> log4j:WARN See
http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[junit4:junit4]   2> Creating dataDir:
C:\work\oss\branch_4x\solr\build\contrib\solr-dataimporthandler\test\J0\.\solrtest-TestSqlEntityProcessorDelta-1368691492059
[junit4:junit4] OK  4.21s |
TestSqlEntityProcessorDelta.testWithSimpleTransformer
[junit4:junit4] OK  0.40s | TestSqlEntityProcessorDelta.testSingleEntity
[junit4:junit4] OK  1.23s |
TestSqlEntityProcessorDelta.testWithComplexTransformer
[junit4:junit4] HEARTBEAT J0 PID(5832@shalin-desktop): 2013-05-16T13:36:12,
stalled for 71.6s at: TestSqlEntityProcessorDelta.testChildEntities
[junit4:junit4] HEARTBEAT J0 PID(5832@shalin-desktop): 2013-05-16T13:37:12,
stalled for  132s at: TestSqlEntityProcessorDelta.testChildEntities
[junit4:junit4] OK  0.18s |
TestSqlEntityProcessorDelta.testChildEntities
[junit4:junit4]   2> NOTE: test params are: codec=Lucene42:
{timestamp=PostingsFormat(name=Memory doPackFST= false),
id=MockFixedIntBlock(blockSize=1812),
COUNTRY_CODES_mult_s=PostingsFormat(name=Memory doPackFST= false),
AddAColumn_s=PostingsFormat(name=Memory doPackFST= false),
countryAdded_s=PostingsFormat(name=Memory doPackFST= false),
COUNTRY_CODE_s=PostingsFormat(name=Memory doPackFST= false),
NAME_mult_s=PostingsFormat(name=Direct),
COUNTRY_NAME_s=PostingsFormat(name=Direct)}, docValues:{},
sim=DefaultSimilarity, locale=iw, timezone=Pacific/Samoa
[junit4:junit4]   2> NOTE: Windows 8 6.2 amd64/Oracle Corporation 1.7.0_17
(64-bit)/cpus=8,threads=1,free=386020992,total=513998848
[junit4:junit4]   2> NOTE: All tests run in this JVM:
[TestSqlEntityProcessorDelta]
[junit4:junit4]   2> NOTE: reproduce with: ant test
 -Dtestcase=TestSqlEntityProcessorDelta -Dtests.seed=756CA78971FEDC3E
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=iw
-Dtests.timezone=Pacific/Samoa -Dtests.file.encoding=ISO-8859-1
[junit4:junit4] ERROR   0.00s | TestSqlEntityProcessorDelta (suite) <<<
[junit4:junit4]> Throwable #1: java.lang.AssertionError: ERROR:
SolrIndexSearcher opens=13 closes=0
[junit4:junit4]>at
__randomizedtesting.SeedInfo.seed([756CA78971FEDC3E]:0)
[junit4:junit4]>at
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:252)
[junit4:junit4]>at
org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:101)
[junit4:junit4]>at java.lang.Thread.run(Thread.java:722)
[junit4:junit4] Completed in 144.51s, 4 tests, 1 failure <<< FAILURES!
[junit4:junit4]
[junit4:junit4]
[junit4:junit4] Tests with failures:
[junit4:junit4]   -
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta (suite)
[junit4:junit4]



On Thu, May 16, 2013 at 12:17 PM, Policeman Jenkins Server <jenk...@thetaphi.de> wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5616/
> Java: 64bit/jdk1.8.0-ea-b89 -XX:+UseCompressedOops -XX:+UseSerialGC
>
> 3 tests failed.
> FAILED:  junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessor
>
> Error Message:
> access denied ("java.sql.SQLPermission" "deregisterDriver")
>
> Stack Trace:
> java.security.AccessControlException: access denied ("java.sql.SQLPermission" "deregisterDriver")
>     at __randomizedtesting.SeedInfo.seed([18C23A83BF7C726C]:0)
>     at java.security.AccessControlContext.checkPermission(AccessControlContext.java:364)
>     at java.security.AccessController.checkPermission(AccessController.java:562)
>     at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
>     at java.sql.DriverManager.deregisterDriver(DriverManager.java:399)
>     at org.apache.derby.jdbc.AutoloadedDriver.unregisterDriverModule(Unknown Source)
>     at org.apache.derby.jdbc.Driver20.stop(Unknown Source)
>     at org.apache.derby.impl.services.monitor.TopService.stop(Unknown Source)
>     at org.apache.derby.impl.services.monitor.TopService.shutdown(Unknown Source)
>     at org.apache.derby.impl.services.monitor.BaseMonitor.shutdown(Unknown Source)
>     at org.apache.derby.impl.services.monitor.BaseMonitor.shutdown(Unknown Source)
>     at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
>     at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
>     at java.sql.DriverManager.get

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b89) - Build # 5617 - Still Failing!

2013-05-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5617/
Java: 32bit/jdk1.8.0-ea-b89 -server -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSimplePropertiesWriter

Error Message:
access denied ("java.sql.SQLPermission" "deregisterDriver")

Stack Trace:
java.security.AccessControlException: access denied ("java.sql.SQLPermission" 
"deregisterDriver")
at __randomizedtesting.SeedInfo.seed([E8696972B2789895]:0)
at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java:364)
at 
java.security.AccessController.checkPermission(AccessController.java:562)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.sql.DriverManager.deregisterDriver(DriverManager.java:399)
at 
org.apache.derby.jdbc.AutoloadedDriver.unregisterDriverModule(Unknown Source)
at org.apache.derby.jdbc.Driver20.stop(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.stop(Unknown 
Source)
at org.apache.derby.impl.services.monitor.TopService.shutdown(Unknown 
Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.shutdown(Unknown 
Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.shutdown(Unknown 
Source)
at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
at java.sql.DriverManager.getConnection(DriverManager.java:661)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at 
org.apache.solr.handler.dataimport.AbstractDIHJdbcTestCase.afterClassDihJdbcTest(AbstractDIHJdbcTestCase.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:700)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:724)
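The access-denied failures in this thread come from JDK 8, where DriverManager.deregisterDriver() checks java.sql.SQLPermission("deregisterDriver") under a SecurityManager; Derby's AutoloadedDriver trips it while deregistering itself during engine shutdown. A sketch of the policy grant that would allow it (the actual test policy file used by the Lucene/Solr build may differ):

```
grant {
  // JDK 8's DriverManager.deregisterDriver() requires this permission;
  // Derby's AutoloadedDriver deregisters itself during engine shutdown.
  permission java.sql.SQLPermission "deregisterDriver";
};
```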


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSimplePropertiesWriter

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSimplePropertiesWriter: 1) 
Thread[id=34, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter] at java.lang.Object.wait(Native 
Method) at java.lang.Object.wait(Object.java:502) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSimplePropertiesWriter: 
   1) Thread[id=34, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
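The thread-leak failure above is the classic java.util.Timer pattern: Timer spawns a dedicated non-daemon worker thread that keeps running until cancel() is called. A minimal, self-contained illustration of the leak and its fix (a sketch, not DIH's actual code):

```java
import java.util.Timer;
import java.util.TimerTask;

public class TimerLeakSketch {
    // True if a live thread with the given name exists in this JVM.
    static boolean threadAlive(String name) {
        for (Thread t : Thread.getAllStackTraces().keySet())
            if (t.getName().equals(name) && t.isAlive()) return true;
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer("Timer-0");   // non-daemon worker thread
        timer.schedule(new TimerTask() { public void run() {} }, 10);
        Thread.sleep(100);                    // the task has long finished...
        System.out.println("before cancel: " + threadAlive("Timer-0")); // true: thread lingers
        timer.cancel();                       // the fix: stop the worker thread
        Thread.sleep(200);
        System.out.println("after cancel: " + threadAlive("Timer-0"));  // false
    }
}
```

A suite that forgets the cancel() leaves exactly the WAITING Timer-0 thread that the ThreadLeakError reports.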

[jira] [Commented] (SOLR-4826) TikaException Parsing PPTX file

2013-05-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659348#comment-13659348
 ] 

Jan Høydahl commented on SOLR-4826:
---

Please report this bug in the TIKA project http://tika.apache.org/ and attach a 
sample file which triggers the problem. (First, check if it is already reported 
or perhaps fixed in a newer version.)

> TikaException Parsing PPTX file
> ---
>
> Key: SOLR-4826
> URL: https://issues.apache.org/jira/browse/SOLR-4826
> Project: Solr
>  Issue Type: Bug
>Reporter: Thomas Weidman
>
> Error parsing PPTX file:
> org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.microsoft.ooxml.OOXMLParser@33d839d1
> org.apache.solr.common.SolrException: 
> org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.microsoft.ooxml.OOXMLParser@33d839d1
>   at 
> org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:225)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>   at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:240)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>   at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
>   at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
>   at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>   at java.lang.Thread.run(Thread.java:619)
> Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal 
> IOException from org.apache.tika.parser.microsoft.ooxml.OOXMLParser@33d839d1
>   at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:248)
>   at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
>   at 
> org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
>   at 
> org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:219)
>   ... 19 more
> Caused by: java.io.IOException: Unable to read entire header; 0 bytes read; 
> expected 512 bytes
>   at 
> org.apache.poi.poifs.storage.HeaderBlock.alertShortRead(HeaderBlock.java:226)
>   at 
> org.apache.poi.poifs.storage.HeaderBlock.readFirst512(HeaderBlock.java:207)
>   at 
> org.apache.poi.poifs.storage.HeaderBlock.<init>(HeaderBlock.java:104)
>   at 
> org.apache.poi.poifs.filesystem.POIFSFileSystem.<init>(POIFSFileSystem.java:138)
>   at 
> org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.handleEmbeddedOLE(AbstractOOXMLExtractor.java:149)
>   at 
> org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.handleEmbeddedParts(AbstractOOXMLExtractor.java:129)
>   at 
> org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.getXHTML(AbstractOOXMLExtractor.java:107)
>   at 
> org.apache.tika.parser.microsoft.ooxml.OOXMLExtractorFactory.parse(OOXMLExtractorFactory.java:112)
>   at 
> org.apache.tika.parser.microsoft.ooxml.OOXMLParser.parse(OOXMLParser.java:82)
>   at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
>   ... 22 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

RE: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b89) - Build # 5616 - Failure!

2013-05-16 Thread Uwe Schindler
This has nothing to do with the Java 8 permission exceptions? How is this 
related?

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com] 
Sent: Thursday, May 16, 2013 10:09 AM
To: dev@lucene.apache.org
Subject: Re: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b89) - Build # 
5616 - Failure!

 

The above seeds result in the following failure consistently on Windows 8 6.2 
amd64/Oracle Corporation 1.7.0_17 (64-bit)/cpus=8,threads=1

 

ant test  -Dtestcase=TestSqlEntityProcessor -Dtests.seed=18C23A83BF7C726C 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ar_SA 
-Dtests.timezone=Etc/UTC -Dtests.file.encoding=ISO-8859-1

 

[junit4:junit4] Started J0 PID(5832@shalin-desktop).

[junit4:junit4] Suite: 
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

[junit4:junit4]   2> log4j:WARN No appenders could be found for logger 
(org.apache.solr.SolrTestCaseJ4).

[junit4:junit4]   2> log4j:WARN Please initialize the log4j system properly.

[junit4:junit4]   2> log4j:WARN See 
http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

[junit4:junit4]   2> Creating dataDir: 
C:\work\oss\branch_4x\solr\build\contrib\solr-dataimporthandler\test\J0\.\solrtest-TestSqlEntityProcessorDelta-1368691492059

[junit4:junit4] OK  4.21s | 
TestSqlEntityProcessorDelta.testWithSimpleTransformer

[junit4:junit4] OK  0.40s | TestSqlEntityProcessorDelta.testSingleEntity

[junit4:junit4] OK  1.23s | 
TestSqlEntityProcessorDelta.testWithComplexTransformer

[junit4:junit4] HEARTBEAT J0 PID(5832@shalin-desktop): 2013-05-16T13:36:12, 
stalled for 71.6s at: TestSqlEntityProcessorDelta.testChildEntities

[junit4:junit4] HEARTBEAT J0 PID(5832@shalin-desktop): 2013-05-16T13:37:12, 
stalled for  132s at: TestSqlEntityProcessorDelta.testChildEntities

[junit4:junit4] OK  0.18s | TestSqlEntityProcessorDelta.testChildEntities

[junit4:junit4]   2> NOTE: test params are: codec=Lucene42: 
{timestamp=PostingsFormat(name=Memory doPackFST= false), 
id=MockFixedIntBlock(blockSize=1812), 
COUNTRY_CODES_mult_s=PostingsFormat(name=Memory doPackFST= false), 
AddAColumn_s=PostingsFormat(name=Memory doPackFST= false), 
countryAdded_s=PostingsFormat(name=Memory doPackFST= false), 
COUNTRY_CODE_s=PostingsFormat(name=Memory doPackFST= false), 
NAME_mult_s=PostingsFormat(name=Direct), 
COUNTRY_NAME_s=PostingsFormat(name=Direct)}, docValues:{}, 
sim=DefaultSimilarity, locale=iw, timezone=Pacific/Samoa

[junit4:junit4]   2> NOTE: Windows 8 6.2 amd64/Oracle Corporation 1.7.0_17 
(64-bit)/cpus=8,threads=1,free=386020992,total=513998848

[junit4:junit4]   2> NOTE: All tests run in this JVM: 
[TestSqlEntityProcessorDelta]

[junit4:junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestSqlEntityProcessorDelta -Dtests.seed=756CA78971FEDC3E 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=iw 
-Dtests.timezone=Pacific/Samoa -Dtests.file.encoding=ISO-8859-1

[junit4:junit4] ERROR   0.00s | TestSqlEntityProcessorDelta (suite) <<<

[junit4:junit4]> Throwable #1: java.lang.AssertionError: ERROR: 
SolrIndexSearcher opens=13 closes=0

[junit4:junit4]>at 
__randomizedtesting.SeedInfo.seed([756CA78971FEDC3E]:0)

[junit4:junit4]>at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:252)

[junit4:junit4]>at 
org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:101)

[junit4:junit4]>at java.lang.Thread.run(Thread.java:722)

[junit4:junit4] Completed in 144.51s, 4 tests, 1 failure <<< FAILURES!

[junit4:junit4]

[junit4:junit4]

[junit4:junit4] Tests with failures:

[junit4:junit4]   - 
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta (suite)

[junit4:junit4]

 

 

On Thu, May 16, 2013 at 12:17 PM, Policeman Jenkins Server 
 wrote:

Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5616/
Java: 64bit/jdk1.8.0-ea-b89 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessor

Error Message:
access denied ("java.sql.SQLPermission" "deregisterDriver")

Stack Trace:
java.security.AccessControlException: access denied ("java.sql.SQLPermission" 
"deregisterDriver")
at __randomizedtesting.SeedInfo.seed([18C23A83BF7C726C]:0)
at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java:364)
at 
java.security.AccessController.checkPermission(AccessController.java:562)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.sql.DriverManager.deregisterDriver(DriverManager.java:399)
at 
org.apache.derby.jdbc.AutoloadedDriver.unregisterDriverModule(Unknown Source)
at org.apache.derby.jdbc.Driver20.stop(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.stop(Unknown 
Source)

Re: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b89) - Build # 5616 - Failure!

2013-05-16 Thread Shalin Shekhar Mangar
I'm sorry I didn't mean to imply that they are related. Just that I can
reproduce a failure consistently but it is not being caught by jenkins.
I'll investigate.


On Thu, May 16, 2013 at 1:41 PM, Uwe Schindler  wrote:

> This has nothing to do with the Java 8 permission exceptions? How is this
> related?
>
> [...]

[jira] [Assigned] (LUCENE-5002) Deadlock in DocumentsWriterFlushControl

2013-05-16 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned LUCENE-5002:
---

Assignee: Simon Willnauer

> Deadlock in DocumentsWriterFlushControl
> ---
>
> Key: LUCENE-5002
> URL: https://issues.apache.org/jira/browse/LUCENE-5002
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: OpenJDK 64-Bit Server VM (23.7-b01 mixed mode)
> Linux Ubuntu Server 12.04 LTS 64-Bit
>Reporter: Sergiusz Urbaniak
>Assignee: Simon Willnauer
>
> Hi all,
> We have an obvious deadlock between a "MaybeRefreshIndexJob" thread
> calling ReferenceManager.maybeRefresh(ReferenceManager.java:204) and a
> "RebuildIndexJob" thread calling
> IndexWriter.deleteAll(IndexWriter.java:2065).
> Lucene wants to flush in the "MaybeRefreshIndexJob" thread trying to 
> intrinsically lock the IndexWriter instance at 
> {{DocumentsWriterPerThread.java:563}} before notifyAll()ing the flush. 
> Simultaneously the "RebuildIndexJob" thread who already intrinsically locked 
> the IndexWriter instance at IndexWriter#deleteAll wait()s at 
> {{DocumentsWriterFlushControl.java:245}} for the flush forever causing a 
> deadlock.
> {code}
> "MaybeRefreshIndexJob Thread - 2" daemon prio=10 tid=0x7f8fe4006000 
> nid=0x1ac2 waiting for monitor entry [0x7f8fa7bf7000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.lucene.index.IndexWriter.useCompoundFile(IndexWriter.java:2223)
>   - waiting to lock <0xf1c00438> (a 
> org.apache.lucene.index.IndexWriter)
>   at 
> org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:563)
>   at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:533)
>   at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
>   at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:559)
>   at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:365)
>   - locked <0xf1c007d0> (a java.lang.Object)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:270)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:245)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
>   at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:170)
>   at 
> org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:118)
>   at 
> org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58)
>   at 
> org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:155)
>   at 
> org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:204)
>   at jobs.MaybeRefreshIndexJob.timeout(MaybeRefreshIndexJob.java:47)
> "RebuildIndexJob Thread - 1" prio=10 tid=0x7f903000a000 nid=0x1a38 in 
> Object.wait() [0x7f9037dd6000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xf1c0c240> (a 
> org.apache.lucene.index.DocumentsWriterFlushControl)
>   at java.lang.Object.wait(Object.java:503)
>   at 
> org.apache.lucene.index.DocumentsWriterFlushControl.waitForFlush(DocumentsWriterFlushControl.java:245)
>   - locked <0xf1c0c240> (a 
> org.apache.lucene.index.DocumentsWriterFlushControl)
>   at 
> org.apache.lucene.index.DocumentsWriter.abort(DocumentsWriter.java:235)
>   - locked <0xf1c05370> (a 
> org.apache.lucene.index.DocumentsWriter)
>   at org.apache.lucene.index.IndexWriter.deleteAll(IndexWriter.java:2065)
>   - locked <0xf1c00438> (a org.apache.lucene.index.IndexWriter)
>   at jobs.RebuildIndexJob.buildIndex(RebuildIndexJob.java:102)
> {code}
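The lock pattern described above can be shown in a minimal, self-contained sketch (illustrative names only, not Lucene's actual classes; the wait is bounded here so the demo terminates instead of hanging the way the real code does):

```java
public class DeadlockSketch {
    static final Object writer = new Object();       // stands in for the IndexWriter monitor
    static final Object flushControl = new Object(); // stands in for DocumentsWriterFlushControl

    public static void main(String[] args) throws InterruptedException {
        // Like deleteAll(): takes the writer lock, then waits for the flush.
        Thread rebuild = new Thread(() -> {
            synchronized (writer) {
                synchronized (flushControl) {
                    try {
                        flushControl.wait(500); // bounded so this sketch terminates
                    } catch (InterruptedException ignored) {}
                }
            }
        });
        // Like the flushing thread: needs the writer lock before it can notify.
        Thread refresh = new Thread(() -> {
            synchronized (writer) {
                synchronized (flushControl) {
                    flushControl.notifyAll(); // unreachable while rebuild holds writer
                }
            }
        });
        rebuild.start();
        Thread.sleep(100);  // let rebuild take the writer lock first
        refresh.start();
        Thread.sleep(200);
        // With an unbounded wait() this is a permanent deadlock.
        System.out.println("refresh state: " + refresh.getState()); // BLOCKED
        rebuild.join();
        refresh.join();
    }
}
```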




[jira] [Updated] (LUCENE-5002) Deadlock in DocumentsWriterFlushControl

2013-05-16 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5002:


Attachment: LUCENE-5002_test.patch

Here is a patch with a test that hangs; it is pretty straightforward. The problem is that we are locking on the IndexWriter from inside the DWPT, or, put the other way around, that there are too many synchronized blocks in IW for it to be safe to call back into IW from a DWPT.

I need to look into this more closely to figure out how to fix it.

> Deadlock in DocumentsWriterFlushControl
> ---
>
> Key: LUCENE-5002
> URL: https://issues.apache.org/jira/browse/LUCENE-5002
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: OpenJDK 64-Bit Server VM (23.7-b01 mixed mode)
> Linux Ubuntu Server 12.04 LTS 64-Bit
>Reporter: Sergiusz Urbaniak
>Assignee: Simon Willnauer
> Attachments: LUCENE-5002_test.patch
>
>
> [...]




[jira] [Created] (SOLR-4828) I need to analyse and extract semantics of the input query based on which I want to provide filters and to the query and submit it to solr for relevant results.

2013-05-16 Thread Neha Yadav (JIRA)
Neha Yadav created SOLR-4828:


 Summary: I need to analyse and extract semantics of the input 
query based on which I want to provide filters and to the query and submit it 
to solr for relevant results.
 Key: SOLR-4828
 URL: https://issues.apache.org/jira/browse/SOLR-4828
 Project: Solr
  Issue Type: Wish
Reporter: Neha Yadav


I need to analyse and extract the semantics of the input query, based on which I 
want to add filters to the query and submit it to Solr for relevant results. 
Please can anyone help me and provide directions on how to proceed? I would like 
to do only the configuration in the solrconfig.xml file, so that it can be 
scaled easily for further versions too. 




[jira] [Updated] (LUCENE-4935) CustomScoreQuery has broken boosting

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4935:
--

Fix Version/s: (was: 4.4)
   4.3.1

> CustomScoreQuery has broken boosting
> 
>
> Key: LUCENE-4935
> URL: https://issues.apache.org/jira/browse/LUCENE-4935
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Robert Muir
>  Labels: lucene-4.3.1-candidate
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4935.patch, LUCENE-4935.patch
>
>
> CustomScoreQuery wrongly applies boost^2 instead of boost.
> It wrongly incorporates its boost into the normalization factor passed down 
> to subquery (like booleanquery does) and *also* multiplies it directly in its 
> scorer.
> The only reason the test passes today is because it compares raw score 
> magnitudes when querynorm is on, which normalizes this away.
> Changing the test to use newSearcher() demonstrates the brokenness.
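The double application described above comes down to plain arithmetic; a small illustrative sketch (not Lucene's real Weight/Scorer API):

```java
public class BoostSketch {
    // A sub-query score is its raw score times the normalization factor it was given.
    static float subScore(float raw, float norm) { return raw * norm; }

    public static void main(String[] args) {
        float raw = 2.0f, boost = 3.0f;

        // Broken pattern described in the issue: the boost leaks into the
        // norm passed down to the sub-query AND is multiplied in again by the scorer.
        float broken = boost * subScore(raw, 1.0f * boost);
        System.out.println("broken = " + broken); // 2 * 3 * 3 = 18.0

        // Intended behavior: the boost is applied exactly once.
        float fixed = boost * subScore(raw, 1.0f);
        System.out.println("fixed  = " + fixed);  // 2 * 3 = 6.0
    }
}
```

With query-norm enabled the extra factor normalizes away, which is why comparing raw magnitudes hid the bug.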




[jira] [Updated] (LUCENE-4935) CustomScoreQuery has broken boosting

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4935:
--

Labels:   (was: lucene-4.3.1-candidate)

> CustomScoreQuery has broken boosting
> 
>
> Key: LUCENE-4935
> URL: https://issues.apache.org/jira/browse/LUCENE-4935
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Robert Muir
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4935.patch, LUCENE-4935.patch
>
>
> [...]




[jira] [Commented] (SOLR-4823) Split LBHttpSolrServer into two classes one for the solrj use case and one for the solr cloud use case

2013-05-16 Thread philip hoy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659365#comment-13659365
 ] 

philip hoy commented on SOLR-4823:
--

The load balancer does indeed use round robin to pick a shard replica to 
forward the request to. However, it does not do the clever work of picking out 
which shards are candidates for a particular query; that role is fulfilled by 
org.apache.solr.handler.component.HttpShardHandler.

> Split LBHttpSolrServer into two classes one for the solrj use case and one 
> for the solr cloud use case
> --
>
> Key: SOLR-4823
> URL: https://issues.apache.org/jira/browse/SOLR-4823
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: philip hoy
>Priority: Minor
>
> The LBHttpSolrServer has too many responsibilities. It could perhaps be 
> broken into two classes, one in solrj to be used in the place of an external 
> load balancer that balances across a known set of solr servers defined at 
> construction time and one in solr core to be used by the solr cloud 
> components that balances across servers dependant on the request.
> To save code duplication, if much arises, an abstract base class could be 
> introduced into solrj.




[jira] [Commented] (LUCENE-3422) IndexWriter.optimize() throws FileNotFoundException and IOException

2013-05-16 Thread l0co (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659366#comment-13659366
 ] 

l0co commented on LUCENE-3422:
--

ext3. I'm going to check whether it's possible to have multiple IndexWriters 
opened on the same directory.

> IndexWriter.optimize() throws FileNotFoundException and IOException
> ---
>
> Key: LUCENE-3422
> URL: https://issues.apache.org/jira/browse/LUCENE-3422
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Elizabeth Nisha
>
> I am using lucene 3.0.2 search APIs for my application. 
> Indexed data is about 350MB and time taken for indexing is 25 hrs. Search 
> indexing and Optimization runs in two different threads. Optimization runs 
> for every 1 hour and it doesn't run while indexing is going on and vice 
> versa. When optimization is going on using IndexWriter.optimize(), 
> FileNotFoundException and IOException are seen in my log and the index file 
> is getting corrupted, log says
> 1. java.io.IOException: No sub-file with id _5r8.fdt found 
> [The file name in this message changes over time (_5r8.fdt, _6fa.fdt, 
> _6uh.fdt, ..., _emv.fdt) ]
> 2. java.io.FileNotFoundException: 
> /local/groups/necim/index_5.3/index/_bdx.cfs (No such file or directory)  
> 3. java.io.FileNotFoundException: 
> /local/groups/necim/index_5.3/index/_hkq.cfs (No such file or directory)
>   Stack trace: java.io.IOException: background merge hit exception: 
> _hkp:c100->_hkp _hkq:c100->_hkp _hkr:c100->_hkr _hks:c100->_hkr _hxb:c5500 
> _hx5:c1000 _hxc:c198
> 84 into _hxd [optimize] [mergeDocStores]
>at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2359)
>at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2298)
>at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2268)
>at com.telelogic.cs.search.SearchIndex.doOptimize(SearchIndex.java:130)
>at 
> com.telelogic.cs.search.SearchIndexerThread$1.run(SearchIndexerThread.java:337)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.FileNotFoundException: 
> /local/groups/necim/index_5.3/index/_hkq.cfs (No such file or directory)
>at java.io.RandomAccessFile.open(Native Method)
>at java.io.RandomAccessFile.(RandomAccessFile.java:212)
>at 
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.(SimpleFSDirectory.java:76)
>at 
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.(SimpleFSDirectory.java:97)
>at 
> org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.(NIOFSDirectory.java:87)
>at 
> org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:67)
>at 
> org.apache.lucene.index.CompoundFileReader.(CompoundFileReader.java:67)
>at 
> org.apache.lucene.index.SegmentReader$CoreReaders.(SegmentReader.java:114)
>at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:590)
>at 
> org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
>at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4309)
>at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3965)
>at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:231)
>at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:288)
>  




[jira] [Updated] (LUCENE-4948) Stink bug in PostingsHighlighter

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4948:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 r1483258

> Stink bug in PostingsHighlighter
> 
>
> Key: LUCENE-4948
> URL: https://issues.apache.org/jira/browse/LUCENE-4948
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Reporter: Michael McCandless
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4948.patch, LUCENE-4948.patch
>
>
> This test failure reproduces on IBM J9:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=TestPostingsHighlighter 
> -Dtests.method=testCambridgeMA -Dtests.seed=2A9A93DAC39E0938 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=es_HN 
> -Dtests.timezone=America/Yellowknife -Dtests.file.encoding=UTF-8
> {noformat}
> {noformat}
> Stack Trace:
> java.lang.ArrayIndexOutOfBoundsException: Array index out of range: 37
> at 
> __randomizedtesting.SeedInfo.seed([2A9A93DAC39E0938:AB8FF071AD305139]:0)
> at 
> org.apache.lucene.search.postingshighlight.Passage.addMatch(Passage.java:53)
> at 
> org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightDoc(PostingsHighlighter.java:547)
> at 
> org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightField(PostingsHighlighter.java:425)
> at 
> org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightFields(PostingsHighlighter.java:364)
> at 
> org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlightFields(PostingsHighlighter.java:268)
> at 
> org.apache.lucene.search.postingshighlight.PostingsHighlighter.highlight(PostingsHighlighter.java:198)
> at 
> org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testCambridgeMA(TestPostingsHighlighter.java:373)
> {noformat}
> I think it's because J9 grows arrays in a different progression than other 
> JVMs ... we should fix PostingsHighlighter to forcefully grow the arrays to 
> the same length instead of this:
> {noformat}
> if (numMatches == matchStarts.length) {
>   matchStarts = ArrayUtil.grow(matchStarts, numMatches+1);
>   matchEnds = ArrayUtil.grow(matchEnds, numMatches+1);
>   BytesRef newMatchTerms[] = new 
> BytesRef[ArrayUtil.oversize(numMatches+1, 
> RamUsageEstimator.NUM_BYTES_OBJECT_REF)];
>   System.arraycopy(matchTerms, 0, newMatchTerms, 0, numMatches);
>   matchTerms = newMatchTerms;
> }
> {noformat}
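A hedged sketch of the suggested fix, with a plain doubling strategy standing in for Lucene's ArrayUtil.oversize (and String[] standing in for BytesRef[]): compute one new length and grow all three parallel arrays to it, so no JVM-specific growth progression can get them out of sync:

```java
import java.util.Arrays;

// Hypothetical stand-in for Passage, not the real class: the key point is
// that ONE new length is computed and applied to all three parallel arrays.
class PassageSketch {
    int[] matchStarts = new int[8];
    int[] matchEnds = new int[8];
    String[] matchTerms = new String[8]; // BytesRef[] in the real class
    int numMatches = 0;

    void addMatch(int start, int end, String term) {
        if (numMatches == matchStarts.length) {
            int newLen = matchStarts.length * 2; // one length for all three
            matchStarts = Arrays.copyOf(matchStarts, newLen);
            matchEnds = Arrays.copyOf(matchEnds, newLen);
            matchTerms = Arrays.copyOf(matchTerms, newLen);
        }
        matchStarts[numMatches] = start;
        matchEnds[numMatches] = end;
        matchTerms[numMatches] = term;
        numMatches++;
    }
}
```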




Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1623 - Still Failing

2013-05-16 Thread Adrien Grand
I can reproduce this one. It seems to me that the problem is that
MockDirectoryWrapper.getRecomputedSizeInBytes uses RAMFile.length, although
RAMFile.length is only set on flush or seek and is 0 until then. Should
setFileLength be called after every write?
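The suspected behavior can be illustrated with a toy buffered file whose recorded length is only updated on flush (hypothetical class, not the actual RAMFile/MockDirectoryWrapper code):

```java
import java.io.ByteArrayOutputStream;

// Hypothetical illustration of the suspected problem: the recorded file
// length is only updated on flush, so a size check taken between writes
// sees a stale (possibly zero) value.
class BufferedFile {
    private final ByteArrayOutputStream backing = new ByteArrayOutputStream();
    private final byte[] buffer = new byte[16];
    private int buffered = 0;
    private long fileLength = 0; // like RAMFile.length: set on flush only

    void writeByte(byte b) {
        if (buffered == buffer.length) {
            flush();
        }
        buffer[buffered++] = b;
        // note: fileLength is NOT updated here -- this is the suspected bug
    }

    void flush() {
        backing.write(buffer, 0, buffered);
        buffered = 0;
        fileLength = backing.size(); // length becomes visible only now
    }

    long recordedLength() { return fileLength; }
}
```

Until flush() runs, recordedLength() stays at 0 no matter how many bytes were written, which is exactly the kind of stale size a recomputation based on it would see.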

On Thu, May 16, 2013 at 8:35 AM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1623/
>
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.index.TestIndexWriterOnDiskFull.testImmediateDiskFull
>
> Error Message:
> did not hit disk full
>
> Stack Trace:
> java.lang.AssertionError: did not hit disk full
> at 
> __randomizedtesting.SeedInfo.seed([9159284DB5A0A12D:1FBA21C9A14FE9]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at 
> org.apache.lucene.index.TestIndexWriterOnDiskFull.testImmediateDiskFull(TestIndexWriterOnDiskFull.java:537)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> at java.lang.Thread.run(Thread.java:679)
>
>
>
>
> Build Log:
> [...truncated 205 lines...]
> [junit4:junit4] Suite: org.apache.lucene.index.TestIndexWrite

[jira] [Updated] (LUCENE-4953) readerClosedListener is not invoked for ParallelCompositeReader's leaves

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4953:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 r1483272

> readerClosedListener is not invoked for ParallelCompositeReader's leaves
> 
>
> Key: LUCENE-4953
> URL: https://issues.apache.org/jira/browse/LUCENE-4953
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4953.patch, LUCENE-4953.patch, LUCENE-4953.patch, 
> LUCENE-4953.patch, LUCENE-4953.patch, LUCENE-4953.patch, LUCENE-4953.patch, 
> LUCENE-4953.patch
>
>
> There was a test failure last night:
> {noformat}
> 1 tests failed.
> REGRESSION:  
> org.apache.lucene.search.grouping.AllGroupHeadsCollectorTest.testBasic
> Error Message:
> testBasic(org.apache.lucene.search.grouping.AllGroupHeadsCollectorTest): 
> Insane FieldCache usage(s) found expected:<0> but was:<2>
> Stack Trace:
> java.lang.AssertionError: 
> testBasic(org.apache.lucene.search.grouping.AllGroupHeadsCollectorTest): 
> Insane FieldCache usage(s) found expected:<0> but was:<2>
> at 
> __randomizedtesting.SeedInfo.seed([1F9C2A2AD23A8E02:B466373F0DE6082C]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.failNotEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:128)
> at org.junit.Assert.assertEquals(Assert.java:472)
> at 
> org.apache.lucene.util.LuceneTestCase.assertSaneFieldCaches(LuceneTestCase.java:592)
> at 
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:55)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> at java.lang.Thread.run(Thread.java:722)
> Build Log:
> [...truncated 6904 lines...]
> [junit

[jira] [Updated] (LUCENE-4968) Several ToParentBlockJoinQuery/Collector issues

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4968:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 r1483274

> Several ToParentBlockJoinQuery/Collector issues
> ---
>
> Key: LUCENE-4968
> URL: https://issues.apache.org/jira/browse/LUCENE-4968
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/join
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4968.patch
>
>
> I hit several issues with ToParentBlockJoinQuery/Collector:
>   * If a given Query sometimes has no child matches then we could hit
> AIOOBE, but should just get 0 children for that parent
>   * TPBJC.getTopGroups incorrectly throws IllegalArgumentException
> when the child query happens to have no matches
>   * We have checks that user didn't accidentally pass a child query
> that matches parent docs ... they are only assertions today but I
> think they should be real checks since it's easy to screw up
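Promoting such an assertion to a real check is mechanical; a hypothetical sketch (not the actual ToParentBlockJoinQuery code):

```java
// Hypothetical sketch of turning an assertion into a real check: a "child"
// query hit that lands on or past the parent document is rejected
// explicitly, not only when assertions (-ea) are enabled.
class BlockJoinChecks {
    static void checkChildDoc(int childDoc, int parentDoc) {
        // before: assert childDoc < parentDoc : "child query matched a parent";
        if (childDoc >= parentDoc) {
            throw new IllegalStateException(
                "child query must only match non-parent docs, but doc="
                + childDoc + " is >= parentDoc=" + parentDoc);
        }
    }
}
```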




Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1623 - Still Failing

2013-05-16 Thread Shai Erera
I hit a similar failure with the Replicator tests, or should I say "did not
hit the expected disk full".

I added a test to TestMDW.testDiskFull (fixing copyBytes to fail on
disk-full), and I put a comment on why you need to call flush().
Basically, you should call flush to ensure that bytes are not buffered,
especially in a test which verifies 'disk full'.
As long as bytes are buffered, I think it's OK to not hit disk-full... they
haven't made it to the directory yet.
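The point about flushing can be sketched with a toy directory that enforces its byte budget only on flush (a hypothetical class, not MockDirectoryWrapper itself):

```java
// Hedged sketch of the testing point: the byte budget is only enforced on
// flush, so a disk-full test MUST flush before asserting that writes failed.
class QuotaDirectory {
    private final int maxBytes;
    private int written = 0;
    private int buffered = 0;

    QuotaDirectory(int maxBytes) { this.maxBytes = maxBytes; }

    void writeByte(byte b) {
        buffered++; // buffered only; no quota check happens here
    }

    void flush() {
        if (written + buffered > maxBytes) {
            throw new IllegalStateException("disk full");
        }
        written += buffered;
        buffered = 0;
    }
}
```

Writing past the budget raises nothing while the bytes sit in the buffer; only flush() surfaces the failure, which matches the observation that buffered bytes never made it to the directory.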

Shai


On Thu, May 16, 2013 at 12:12 PM, Adrien Grand  wrote:

> I can reproduce this one. It seems to me that the problem is that
> MockDirectoryWrapper.getRecomputedSizeInBytes uses RAMFile.length
> although RAMFile.length is only set on flush or seek and is 0 until
> then? Should setFileLength be called after every write?
>
> On Thu, May 16, 2013 at 8:35 AM, Apache Jenkins Server
>  wrote:
> > Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1623/
> >
> > 1 tests failed.
> > REGRESSION:
>  org.apache.lucene.index.TestIndexWriterOnDiskFull.testImmediateDiskFull
> >
> > Error Message:
> > did not hit disk full
> >
> > Stack Trace:
> > java.lang.AssertionError: did not hit disk full
> > at
> __randomizedtesting.SeedInfo.seed([9159284DB5A0A12D:1FBA21C9A14FE9]:0)
> > at org.junit.Assert.fail(Assert.java:93)
> > at
> org.apache.lucene.index.TestIndexWriterOnDiskFull.testImmediateDiskFull(TestIndexWriterOnDiskFull.java:537)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:616)
> > at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
> > at
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
> > at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
> > at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
> > at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
> > at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> > at
> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
> > at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> > at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> > at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> > at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> > at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> > at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> > at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
> > at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
> > at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
> > at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
> > at
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
> > at
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
> > at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
> > at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> > at
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> > at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> > at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> > at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> > at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> > at
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
> > at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> > at
>

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b89) - Build # 5618 - Still Failing!

2013-05-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5618/
Java: 32bit/jdk1.8.0-ea-b89 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 28537 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:383: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:60: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build.xml:306: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1639: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1673: 
Compile failed; see the compiler error output for details.

Total time: 47 minutes 36 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.8.0-ea-b89 -server -XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (LUCENE-5002) Deadlock in DocumentsWriterFlushControl

2013-05-16 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659407#comment-13659407
 ] 

Uwe Schindler commented on LUCENE-5002:
---

bq. Yet, the problem is that we are locking on the index writer in DWPT.

My personal horror scenario! The worst thing you can do is to also externally 
synchronize on the IndexWriter; this also causes deadlocks. We should maybe 
open an issue to fix the synchronization in IW and make it simpler, especially 
by using j.u.concurrent.Lock implementations.
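One thing an explicit j.u.concurrent lock could buy is illustrated below (a hypothetical sketch, not actual IndexWriter code): a timed tryLock lets the flushing thread back off instead of blocking forever on the monitor:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of replacing an intrinsic 'synchronized' monitor with
// a j.u.concurrent lock: tryLock with a timeout lets the caller back off
// and retry instead of deadlocking when the lock is held elsewhere.
class WriterLockSketch {
    private final ReentrantLock writerLock = new ReentrantLock();

    boolean sealSegment() {
        boolean acquired = false;
        try {
            acquired = writerLock.tryLock(1, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (!acquired) {
            return false; // could not get the writer lock; caller can retry
        }
        try {
            // ... work that previously ran inside synchronized (writer) ...
            return true;
        } finally {
            writerLock.unlock();
        }
    }
}
```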

> Deadlock in DocumentsWriterFlushControl
> ---
>
> Key: LUCENE-5002
> URL: https://issues.apache.org/jira/browse/LUCENE-5002
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: OpenJDK 64-Bit Server VM (23.7-b01 mixed mode)
> Linux Ubuntu Server 12.04 LTS 64-Bit
>Reporter: Sergiusz Urbaniak
>Assignee: Simon Willnauer
> Attachments: LUCENE-5002_test.patch
>
>
> Hi all,
> We have an obvious deadlock between a "MaybeRefreshIndexJob" thread
> calling ReferenceManager.maybeRefresh(ReferenceManager.java:204) and a
> "RebuildIndexJob" thread calling
> IndexWriter.deleteAll(IndexWriter.java:2065).
> Lucene wants to flush in the "MaybeRefreshIndexJob" thread, trying to 
> intrinsically lock the IndexWriter instance at 
> {{DocumentsWriterPerThread.java:563}} before notifyAll()ing the flush. 
> Simultaneously, the "RebuildIndexJob" thread, which has already intrinsically 
> locked the IndexWriter instance in IndexWriter#deleteAll, wait()s at 
> {{DocumentsWriterFlushControl.java:245}} for the flush forever, causing a 
> deadlock.
> {code}
> "MaybeRefreshIndexJob Thread - 2" daemon prio=10 tid=0x7f8fe4006000 
> nid=0x1ac2 waiting for monitor entry [0x7f8fa7bf7000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.lucene.index.IndexWriter.useCompoundFile(IndexWriter.java:2223)
>   - waiting to lock <0xf1c00438> (a 
> org.apache.lucene.index.IndexWriter)
>   at 
> org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:563)
>   at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:533)
>   at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
>   at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:559)
>   at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:365)
>   - locked <0xf1c007d0> (a java.lang.Object)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:270)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:245)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
>   at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:170)
>   at 
> org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:118)
>   at 
> org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58)
>   at 
> org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:155)
>   at 
> org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:204)
>   at jobs.MaybeRefreshIndexJob.timeout(MaybeRefreshIndexJob.java:47)
> "RebuildIndexJob Thread - 1" prio=10 tid=0x7f903000a000 nid=0x1a38 in 
> Object.wait() [0x7f9037dd6000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xf1c0c240> (a 
> org.apache.lucene.index.DocumentsWriterFlushControl)
>   at java.lang.Object.wait(Object.java:503)
>   at 
> org.apache.lucene.index.DocumentsWriterFlushControl.waitForFlush(DocumentsWriterFlushControl.java:245)
>   - locked <0xf1c0c240> (a 
> org.apache.lucene.index.DocumentsWriterFlushControl)
>   at 
> org.apache.lucene.index.DocumentsWriter.abort(DocumentsWriter.java:235)
>   - locked <0xf1c05370> (a 
> org.apache.lucene.index.DocumentsWriter)
>   at org.apache.lucene.index.IndexWriter.deleteAll(IndexWriter.java:2065)
>   - locked <0xf1c00438> (a org.apache.lucene.index.IndexWriter)
>   at jobs.RebuildIndexJob.buildIndex(RebuildIndexJob.java:102)
> {code}


RE: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b89) - Build # 5618 - Still Failing!

2013-05-16 Thread Uwe Schindler
It looks like ECJ is no longer able to read the latest Java 8 class files :(

-ecj-javadoc-lint-src:
 [ecj-lint] Compiling 676 source files
 [ecj-lint] Annotation processing got disabled, since it requires a 1.6 
compliant JVM
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/core/src/java/org/apache/lucene/analysis/Token.java
 (at line 22)
 [ecj-lint] import 
org.apache.lucene.analysis.tokenattributes.FlagsAttribute;
 [ecj-lint]^^
 [ecj-lint] The type java.lang.CharSequence cannot be resolved. It is 
indirectly referenced from required .class files

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
> Sent: Thursday, May 16, 2013 11:50 AM
> To: dev@lucene.apache.org; u...@thetaphi.de; sha...@apache.org
> Subject: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b89) - Build #
> 5618 - Still Failing!
> 
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5618/
> Java: 32bit/jdk1.8.0-ea-b89 -server -XX:+UseConcMarkSweepGC
> 
> All tests passed
> 
> Build Log:
> [...truncated 28537 lines...]
> BUILD FAILED
> /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:383: The
> following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:60: The
> following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build.xml:306:
> The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-
> build.xml:1639: The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-
> build.xml:1673: Compile failed; see the compiler error output for details.
> 
> Total time: 47 minutes 36 seconds
> Build step 'Invoke Ant' marked build as failure Description set: Java:
> 32bit/jdk1.8.0-ea-b89 -server -XX:+UseConcMarkSweepGC Archiving
> artifacts Recording test results Email was triggered for: Failure Sending 
> email
> for trigger: Failure
> 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3422) IndexWriter.optimize() throws FileNotFoundException and IOException

2013-05-16 Thread l0co (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659413#comment-13659413
 ] 

l0co commented on LUCENE-3422:
--

1. Files are immediately created in the index directory, because the HS 
Workspace commits after each unit of work.
2. There's only one IndexWriter using a given index directory (for both update 
and merge).

Now I've run some concurrency tests, from which it looks like the concurrency 
model of merging is fine. This is probably some other problem on my side.

> IndexWriter.optimize() throws FileNotFoundException and IOException
> ---
>
> Key: LUCENE-3422
> URL: https://issues.apache.org/jira/browse/LUCENE-3422
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Elizabeth Nisha
>
> I am using lucene 3.0.2 search APIs for my application. 
> Indexed data is about 350MB and time taken for indexing is 25 hrs. Search 
> indexing and Optimization runs in two different threads. Optimization runs 
> for every 1 hour and it doesn't run while indexing is going on and vice 
> versa. When optimization is going on using IndexWriter.optimize(), 
> FileNotFoundException and IOException are seen in my log and the index file 
> is getting corrupted, log says
> 1. java.io.IOException: No sub-file with id _5r8.fdt found 
> [The file name in this message changes over time (_5r8.fdt, _6fa.fdt, 
> _6uh.fdt, ..., _emv.fdt) ]
> 2. java.io.FileNotFoundException: 
> /local/groups/necim/index_5.3/index/_bdx.cfs (No such file or directory)  
> 3. java.io.FileNotFoundException: 
> /local/groups/necim/index_5.3/index/_hkq.cfs (No such file or directory)
>   Stack trace: java.io.IOException: background merge hit exception: 
> _hkp:c100->_hkp _hkq:c100->_hkp _hkr:c100->_hkr _hks:c100->_hkr _hxb:c5500 
> _hx5:c1000 _hxc:c19884 into _hxd [optimize] [mergeDocStores]
>at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2359)
>at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2298)
>at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2268)
>at com.telelogic.cs.search.SearchIndex.doOptimize(SearchIndex.java:130)
>at com.telelogic.cs.search.SearchIndexerThread$1.run(SearchIndexerThread.java:337)
>at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.FileNotFoundException: /local/groups/necim/index_5.3/index/_hkq.cfs (No such file or directory)
>at java.io.RandomAccessFile.open(Native Method)
>at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
>at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:76)
>at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:97)
>at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:87)
>at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:67)
>at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:67)
>at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:114)
>at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:590)
>at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
>at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4309)
>at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3965)
>at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:231)
>at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:288)
>  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1623 - Still Failing

2013-05-16 Thread Adrien Grand
On Thu, May 16, 2013 at 11:30 AM, Shai Erera  wrote:
> As long as bytes are buffered, I think it's OK to not hit disk-full... they
> never made it to the directory yet.

Good point. So the test needs to call commit right after
IndexWriter.add(Document) to make sure flush gets called, and
MockDirectoryWrapper.flush needs to check disk full right after
delegate.flush?

-- 
Adrien


testImmediateDiskFull.patch
Description: Binary data
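Shai's point, that buffered bytes cannot trigger disk-full until they actually reach the directory, can be illustrated without Lucene at all. Below is a minimal plain-java.io sketch of the pattern being discussed (check the disk budget right at flush time); all names here are hypothetical stand-ins, not MockDirectoryWrapper's actual API:

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical stand-in for a disk-full-simulating directory wrapper:
// writes sit in a buffer until flush(), so the "disk" only sees them
// (and can reject them) once the delegate is flushed.
class DiskFullSimulatingStream extends FilterOutputStream {
    private final int maxBytes;          // simulated disk capacity
    private int flushedBytes = 0;        // bytes that actually "hit disk"
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    DiskFullSimulatingStream(OutputStream out, int maxBytes) {
        super(out);
        this.maxBytes = maxBytes;
    }

    @Override
    public void write(int b) {
        buffer.write(b);                 // buffered: no disk-full check yet
    }

    @Override
    public void flush() throws IOException {
        // Check disk-full at the moment the buffered bytes would reach
        // disk, mirroring the "check right after delegate.flush" idea.
        if (flushedBytes + buffer.size() > maxBytes) {
            throw new IOException("fake disk full");
        }
        buffer.writeTo(out);
        flushedBytes += buffer.size();
        buffer.reset();
        out.flush();
    }
}
```

With capacity 4, writing three bytes and flushing succeeds; writing two more and flushing throws, because only flush consults the budget.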


[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b89) - Build # 5680 - Failure!

2013-05-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5680/
Java: 32bit/jdk1.8.0-ea-b89 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 28265 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:377: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:60: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:306: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1636:
 The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1670:
 Compile failed; see the compiler error output for details.

Total time: 41 minutes 54 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.8.0-ea-b89 -client -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Updated] (SOLR-4751) The replication problem of the file in a subdirectory.

2013-05-16 Thread Minoru Osuka (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Minoru Osuka updated SOLR-4751:
---

Attachment: SOLR-4751.patch

I modified the code so that the destination directory is created when it does 
not exist before renaming.

The test passed:
Ran ant test -Dtestcase=TestReplicationHandler
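The fix can be sketched in plain Java. This is only an illustration of the pattern (create the destination's parent directory before renaming), with made-up names; it is not the actual SOLR-4751 patch:

```java
import java.io.File;
import java.io.IOException;

// Illustrative sketch: before moving a replicated conf file like
// "lang/stopwords_ja.txt" into place on the slave, make sure the
// destination's parent directory exists. Without this, renameTo fails
// and the file never appears under conf/lang on the slave node.
class SafeRename {
    static boolean moveIntoPlace(File src, File dest) throws IOException {
        File parent = dest.getParentFile();
        if (parent != null && !parent.exists() && !parent.mkdirs()) {
            throw new IOException("could not create " + parent);
        }
        return src.renameTo(dest);
    }
}
```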


> The replication problem of the file in a subdirectory.
> --
>
> Key: SOLR-4751
> URL: https://issues.apache.org/jira/browse/SOLR-4751
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 4.2.1
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-4751.patch, SOLR-4751.patch
>
>
> When set lang/stopwords_ja.txt to confFiles in replication settings,
> {code:xml}
>   
>
>  commit
>  startup
>   name="confFiles">schema.xml,stopwords.txt,lang/stopwords_ja.txt
>
>   
> {code}
> Only stopwords_ja.txt exists in solr/collection1/conf/lang directory on slave 
> node.




[jira] [Updated] (LUCENE-4981) Deprecate PositionFilter

2013-05-16 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-4981:
-

Attachment: LUCENE-4981.patch

Thanks for your feedback, Steve. I updated the patch to say that the problems 
that used to be solved by PositionFilter should be solved at the query parsing 
level. Does it look better?

> Deprecate PositionFilter
> 
>
> Key: LUCENE-4981
> URL: https://issues.apache.org/jira/browse/LUCENE-4981
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-4981.patch, LUCENE-4981.patch
>
>
> According to the documentation 
> (http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory),
>  PositionFilter is mainly useful to make query parsers generate boolean 
> queries instead of phrase queries although this problem can be solved at 
> query parsing level instead of analysis level (eg. using 
> QueryParser.setAutoGeneratePhraseQueries).
> So given that PositionFilter corrupts token graphs (see TestRandomChains), I 
> propose to deprecate it.




Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1621 - Still Failing

2013-05-16 Thread Dawid Weiss
Hi Hoss,

> ...why is SolrRequestParsers 51 MB ?

> Instances of this class shouldn't be fluctuating in size, it doesn't
> maintain any mutable state -- so WTF?

You can try to debug it if you add an afterclass hook and dump the
"size tree" using, for example:
https://github.com/dweiss/java-sizeof/blob/master/src/main/java/com/carrotsearch/sizeof/ObjectTree.java

Typically what happens is that a static field holds references to loggers, 
which in turn hold references to threads, which hold thread locals, etc. It 
really depends on the JVM and its settings. I assumed a class should null out 
its static references after it's done, so an @AfterClass hook is probably 
appropriate. Otherwise we would miss statically allocated stuff, and that can 
contribute to the overall memory use.

> Don't get me wrong: RegexBoostProcessorTest should probably have an
> @AfterClass to null this out -- but i'm concerned that either:

Exactly.

> a) something has changed allowing SolrRequestParsers instances to now
> grow w/o bound
>
> b) something has changed to break the size detection code in the test
> framework.

(b) is unlikely. Dump the allocation tree and see what's the major
contributor to the size.

D.




[jira] [Updated] (LUCENE-4989) Hanging on DocumentsWriterStallControl.waitIfStalled forever

2013-05-16 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4989:
---

Fix Version/s: 4.3.1
   5.0

This looks bad ... we should fix for 4.3.1?

> Hanging on DocumentsWriterStallControl.waitIfStalled forever
> 
>
> Key: LUCENE-4989
> URL: https://issues.apache.org/jira/browse/LUCENE-4989
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.1
> Environment: Linux 2.6.32
>Reporter: Jessica Cheng
>  Labels: hang
> Fix For: 5.0, 4.3.1
>
>
> In an environment where our underlying storage was timing out on various 
> operations, we find all of our indexing threads eventually stuck in the 
> following state (so far for 4 days):
> "Thread-0" daemon prio=5 Thread id=556  WAITING
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:503)
>   at org.apache.lucene.index.DocumentsWriterStallControl.waitIfStalled(DocumentsWriterStallControl.java:74)
>   at org.apache.lucene.index.DocumentsWriterFlushControl.waitIfStalled(DocumentsWriterFlushControl.java:676)
>   at org.apache.lucene.index.DocumentsWriter.preUpdate(DocumentsWriter.java:301)
>   at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:361)
>   at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1484)
>   at ...
> I have not yet enabled detailed logging or tried to reproduce this, but 
> looking at the code, I see that DWFC.abortPendingFlushes does
> try {
>   dwpt.abort();
>   doAfterFlush(dwpt);
> } catch (Throwable ex) {
>   // ignore - keep on aborting the flush queue
> }
> (and the same for the blocked ones). Since the throwable is ignored, I can't 
> say for sure, but I've seen DWPT.abort throw in other cases, so if it does 
> throw, we'd fail to call doAfterFlush and properly decrement flushBytes. This 
> can be a problem, right? Is it possible to do this instead:
> try {
>   dwpt.abort();
> } catch (Throwable ex) {
>   // ignore - keep on aborting the flush queue
> } finally {
>   try {
> doAfterFlush(dwpt);
>   } catch (Throwable ex2) {
> // ignore - keep on aborting the flush queue
>   }
> }
> It's ugly but safer. Otherwise, maybe at least add logging for the throwable 
> just to make sure this is/isn't happening.




[jira] [Commented] (LUCENE-4583) StraightBytesDocValuesField fails if bytes > 32k

2013-05-16 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659447#comment-13659447
 ] 

Michael McCandless commented on LUCENE-4583:


{quote}
Mike, in your latest patch, one improvement that could be made is instead of 
Lucene42DocValuesConsumer assuming the limit is "ByteBlockPool.BYTE_BLOCK_SIZE 
- 2" (which it technically is but only by coincidence), you could instead 
reference a calculated constant shared with the actual code that has this limit 
which is Lucene42DocValuesProducer.loadBinary(). For example, set the constant 
to 2^16-2 but then add an assert in loadBinary that the constant is consistent 
with the PagedBytes instance's config. Or something like that.
{quote}

+1

But, again, let's keep this issue focused on not enforcing a limit in the core 
indexing code.

Per-codec limits are separate issues.
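The suggestion quoted above (one shared named constant, plus an assert where the real constraint lives) might look roughly like this. The class names are simplified stand-ins for Lucene42DocValuesConsumer/Producer, not the real API:

```java
// Sketch of the "share the limit as a constant and assert it where the
// constraint actually lives" idea from the quoted comment.
class BinaryLimits {
    // Single source of truth for the maximum binary doc-values length.
    static final int MAX_BINARY_LENGTH = (1 << 16) - 2; // 2^16 - 2
}

class Consumer {
    static void addBinary(byte[] value) {
        // Enforce the limit by name, not by a magic number that merely
        // happens to coincide with the producer's block size.
        if (value.length > BinaryLimits.MAX_BINARY_LENGTH) {
            throw new IllegalArgumentException(
                "value too long: " + value.length);
        }
        // ... write the value ...
    }
}

class Producer {
    static final int BLOCK_SIZE = 1 << 16; // stand-in for the PagedBytes block size

    static void loadBinary() {
        // Assert the shared constant is consistent with the code that
        // actually imposes the limit, so the two can't silently drift.
        assert BinaryLimits.MAX_BINARY_LENGTH == BLOCK_SIZE - 2;
        // ... load ...
    }
}
```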



> StraightBytesDocValuesField fails if bytes > 32k
> 
>
> Key: LUCENE-4583
> URL: https://issues.apache.org/jira/browse/LUCENE-4583
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.0, 4.1, 5.0
>Reporter: David Smiley
>Priority: Critical
> Fix For: 4.4
>
> Attachments: LUCENE-4583.patch, LUCENE-4583.patch, LUCENE-4583.patch, 
> LUCENE-4583.patch, LUCENE-4583.patch
>
>
> I didn't observe any limitations on the size of a bytes based DocValues field 
> value in the docs.  It appears that the limit is 32k, although I didn't get 
> any friendly error telling me that was the limit.  32k is kind of small IMO; 
> I suspect this limit is unintended and as such is a bug. The following 
> test fails:
> {code:java}
>   public void testBigDocValue() throws IOException {
> Directory dir = newDirectory();
> IndexWriter writer = new IndexWriter(dir, writerConfig(false));
> Document doc = new Document();
> BytesRef bytes = new BytesRef((4+4)*4097);//4096 works
> bytes.length = bytes.bytes.length;//byte data doesn't matter
> doc.add(new StraightBytesDocValuesField("dvField", bytes));
> writer.addDocument(doc);
> writer.commit();
> writer.close();
> DirectoryReader reader = DirectoryReader.open(dir);
> DocValues docValues = MultiDocValues.getDocValues(reader, "dvField");
> //FAILS IF BYTES IS BIG!
> docValues.getSource().getBytes(0, bytes);
> reader.close();
> dir.close();
>   }
> {code}




[jira] [Commented] (LUCENE-4989) Hanging on DocumentsWriterStallControl.waitIfStalled forever

2013-05-16 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659448#comment-13659448
 ] 

Simon Willnauer commented on LUCENE-4989:
-

This might be related to LUCENE-5002, I think; it can happen in multiple 
scenarios. Can you tell if there are any other threads blocked in flush, by 
any chance?



> Hanging on DocumentsWriterStallControl.waitIfStalled forever
> 
>
> Key: LUCENE-4989
> URL: https://issues.apache.org/jira/browse/LUCENE-4989
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.1
> Environment: Linux 2.6.32
>Reporter: Jessica Cheng
>  Labels: hang
> Fix For: 5.0, 4.3.1
>
>
> In an environment where our underlying storage was timing out on various 
> operations, we find all of our indexing threads eventually stuck in the 
> following state (so far for 4 days):
> "Thread-0" daemon prio=5 Thread id=556  WAITING
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:503)
>   at org.apache.lucene.index.DocumentsWriterStallControl.waitIfStalled(DocumentsWriterStallControl.java:74)
>   at org.apache.lucene.index.DocumentsWriterFlushControl.waitIfStalled(DocumentsWriterFlushControl.java:676)
>   at org.apache.lucene.index.DocumentsWriter.preUpdate(DocumentsWriter.java:301)
>   at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:361)
>   at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1484)
>   at ...
> I have not yet enabled detailed logging or tried to reproduce this, but 
> looking at the code, I see that DWFC.abortPendingFlushes does
> try {
>   dwpt.abort();
>   doAfterFlush(dwpt);
> } catch (Throwable ex) {
>   // ignore - keep on aborting the flush queue
> }
> (and the same for the blocked ones). Since the throwable is ignored, I can't 
> say for sure, but I've seen DWPT.abort throw in other cases, so if it does 
> throw, we'd fail to call doAfterFlush and properly decrement flushBytes. This 
> can be a problem, right? Is it possible to do this instead:
> try {
>   dwpt.abort();
> } catch (Throwable ex) {
>   // ignore - keep on aborting the flush queue
> } finally {
>   try {
> doAfterFlush(dwpt);
>   } catch (Throwable ex2) {
> // ignore - keep on aborting the flush queue
>   }
> }
> It's ugly but safer. Otherwise, maybe at least add logging for the throwable 
> just to make sure this is/isn't happening.




Re: [JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 81 - Failure

2013-05-16 Thread Michael McCandless
I'll fix ... looks like we need to carve out an exception for this
javax.servlet JAR...

Mike McCandless

http://blog.mikemccandless.com


On Thu, May 16, 2013 at 12:20 AM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/81/
>
> No tests ran.
>
> Build Log:
> [...truncated 33591 lines...]
> prepare-release-no-sign:
> [mkdir] Created dir: 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease
>  [copy] Copying 416 files to 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/lucene
>  [copy] Copying 194 files to 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/solr
>  [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
>  [exec] NOTE: output encoding is US-ASCII
>  [exec]
>  [exec] Load release URL 
> "file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/"...
>  [exec]
>  [exec] Test Lucene...
>  [exec]   test basics...
>  [exec]   get KEYS
>  [exec] 0.1 MB in 0.01 sec (10.2 MB/sec)
>  [exec]   check changes HTML...
>  [exec]   download lucene-5.0.0-src.tgz...
>  [exec] 26.6 MB in 0.08 sec (326.4 MB/sec)
>  [exec] verify md5/sha1 digests
>  [exec]   download lucene-5.0.0.tgz...
>  [exec] 50.2 MB in 0.09 sec (580.9 MB/sec)
>  [exec] verify md5/sha1 digests
>  [exec]   download lucene-5.0.0.zip...
>  [exec] 59.6 MB in 0.08 sec (709.5 MB/sec)
>  [exec] verify md5/sha1 digests
>  [exec]   unpack lucene-5.0.0.tgz...
>  [exec] verify JAR/WAR metadata...
>  [exec] Traceback (most recent call last):
>  [exec]   File 
> "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
>  line 1383, in 
>  [exec] main()
>  [exec]   File 
> "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
>  line 1327, in main
>  [exec] smokeTest(baseURL, svnRevision, version, tmpDir, isSigned)
>  [exec]   File 
> "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
>  line 1364, in smokeTest
>  [exec] unpackAndVerify('lucene', tmpDir, artifact, svnRevision, 
> version)
>  [exec]   File 
> "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
>  line 595, in unpackAndVerify
>  [exec] verifyUnpacked(project, artifact, unpackPath, svnRevision, 
> version, tmpDir)
>  [exec]   File 
> "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
>  line 720, in verifyUnpacked
>  [exec] checkAllJARs(os.getcwd(), project, svnRevision, version)
>  [exec]   File 
> "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
>  line 269, in checkAllJARs
>  [exec] noJavaPackageClasses('JAR file "%s"' % fullPath, fullPath)
>  [exec]   File 
> "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py",
>  line 169, in noJavaPackageClasses
>  [exec] raise RuntimeError('%s contains sheisty class "%s"' %  (desc, 
> name2))
>  [exec] RuntimeError: JAR file 
> "/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeReleaseTmp/unpack/lucene-5.0.0/replicator/lib/javax.servlet-3.0.0.v201112011016.jar"
>  contains sheisty class "javax/servlet/AsyncContext.class"
>
> BUILD FAILED
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/build.xml:299:
>  exec returned: 1
>
> Total time: 23 minutes 29 seconds
> Build step 'Invoke Ant' marked build as failure
> Email was triggered for: Failure
> Sending email for trigger: Failure
>
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org




Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1623 - Still Failing

2013-05-16 Thread Robert Muir
I don't get it. MDW wraps its IndexOutput so it "knows"... sounds like
the counting is off.

On Thu, May 16, 2013 at 6:24 AM, Adrien Grand  wrote:
> On Thu, May 16, 2013 at 11:30 AM, Shai Erera  wrote:
>> As long as bytes are buffered, I think it's OK to not hit disk-full... they
>> never made it to the directory yet.
>
> Good point. So the test needs to call commit right after
> IndexWriter.add(Document) to make sure flush gets called, and
> MockDirectoryWrapper.flush needs to check disk full right after
> delegate.flush?
>
> --
> Adrien
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Created] (LUCENE-5003) ECJ javadoc linting does not work with recent Java 8

2013-05-16 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-5003:
-

 Summary: ECJ javadoc linting does not work with recent Java 8
 Key: LUCENE-5003
 URL: https://issues.apache.org/jira/browse/LUCENE-5003
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor


With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
longer works:
- The version we currently use can no longer parse the class files in rt.jar, 
or no longer finds them
- The latest version produces a compiler error, because it cannot handle some 
"default" interface method duplication in some Java Collections interfaces 
(CharArraySet fails)

I will disable ECJ linting for now with Java > 1.7




[jira] [Updated] (LUCENE-5003) ECJ javadoc linting does not work with recent Java 8

2013-05-16 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5003:
--

Description: 
With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
longer works:
- The version we currently use (3.7.2) can no longer parse the class files in 
rt.jar, or no longer finds them
- The latest version (4.2.2) produces a compiler error, because it cannot 
handle some "default" interface method duplication in some Java Collections 
interfaces (CharArraySet fails)

I will disable ECJ linting for now with Java > 1.7

  was:
With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
longer works:
- The version we currently use can no longer parse the class files in rt.jar, 
or no longer finds them
- The latest version produces a compiler error, because it cannot handle some 
"default" interface method duplication in some Java Collections interfaces 
(CharArraySet fails)

I will disable ECJ linting for now with Java > 1.7


> ECJ javadoc linting does not work with recent Java 8
> 
>
> Key: LUCENE-5003
> URL: https://issues.apache.org/jira/browse/LUCENE-5003
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Minor
>
> With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
> longer works:
> - The version we currently use (3.7.2) can no longer parse the class files 
> in rt.jar, or no longer finds them
> - The latest version (4.2.2) produces a compiler error, because it cannot 
> handle some "default" interface method duplication in some Java Collections 
> interfaces (CharArraySet fails)
> I will disable ECJ linting for now with Java > 1.7




[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b89) - Build # 5681 - Still Failing!

2013-05-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5681/
Java: 64bit/jdk1.8.0-ea-b89 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 28262 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:377: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:60: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:306: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1636:
 The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1670:
 Compile failed; see the compiler error output for details.

Total time: 41 minutes 23 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.8.0-ea-b89 -XX:-UseCompressedOops 
-XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




RE: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b89) - Build # 5618 - Still Failing!

2013-05-16 Thread Uwe Schindler
I opened issue: https://issues.apache.org/jira/browse/LUCENE-5003

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Uwe Schindler [mailto:u...@thetaphi.de]
> Sent: Thursday, May 16, 2013 12:16 PM
> To: dev@lucene.apache.org
> Subject: RE: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b89) - Build #
> 5618 - Still Failing!
> 
> It looks like ECJ is no longer able to read the latest Java 8 class files :(
> 
> -ecj-javadoc-lint-src:
>  [ecj-lint] Compiling 676 source files
>  [ecj-lint] Annotation processing got disabled, since it requires a 1.6 compliant JVM
>  [ecj-lint] --
>  [ecj-lint] 1. ERROR in /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/core/src/java/org/apache/lucene/analysis/Token.java (at line 22)
>  [ecj-lint]   import org.apache.lucene.analysis.tokenattributes.FlagsAttribute;
>  [ecj-lint]          ^^
>  [ecj-lint] The type java.lang.CharSequence cannot be resolved. It is indirectly referenced from required .class files
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> 
> > -Original Message-
> > From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
> > Sent: Thursday, May 16, 2013 11:50 AM
> > To: dev@lucene.apache.org; u...@thetaphi.de; sha...@apache.org
> > Subject: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b89) -
> > Build #
> > 5618 - Still Failing!
> >
> > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5618/
> > Java: 32bit/jdk1.8.0-ea-b89 -server -XX:+UseConcMarkSweepGC
> >
> > All tests passed
> >
> > Build Log:
> > [...truncated 28537 lines...]
> > BUILD FAILED
> > /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:383: The
> > following error occurred while executing this line:
> > /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:60: The
> > following error occurred while executing this line:
> > /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build.xml:306:
> > The following error occurred while executing this line:
> > /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1639: The following error occurred while executing this line:
> > /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1673: Compile failed; see the compiler error output for details.
> >
> > Total time: 47 minutes 36 seconds
> > Build step 'Invoke Ant' marked build as failure
> > Description set: Java: 32bit/jdk1.8.0-ea-b89 -server -XX:+UseConcMarkSweepGC
> > Archiving artifacts
> > Recording test results
> > Email was triggered for: Failure
> > Sending email for trigger: Failure
> >
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
> commands, e-mail: dev-h...@lucene.apache.org





[jira] [Updated] (LUCENE-5003) ECJ javadoc linting does not work with recent Java 8

2013-05-16 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5003:
--

Attachment: LUCENE-5003.patch

Patch, similar to documentation-lint

> ECJ javadoc linting does not work with recent Java 8
> 
>
> Key: LUCENE-5003
> URL: https://issues.apache.org/jira/browse/LUCENE-5003
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Minor
> Attachments: LUCENE-5003.patch
>
>
> With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
> longer works:
> - The version we currently use (3.7.2) can no longer parse the class files in 
> rt.jar / or does no longer find them
> - The latest version (4.2.2) produces a compiler error, because it cannot 
> handle some "default" interface method duplication in some Java Collections 
> interfaces (CharArraySet fails)
> I will disable the ECJ linting for now with Java > 1.7

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (SOLR-4830) Node doesn't recover properly after fail, when running multiple collections on same nodes with ZooKeeper

2013-05-16 Thread Johannes Henrysson (JIRA)
Johannes Henrysson created SOLR-4830:


 Summary: Node doesn't recover properly after fail, when running 
multiple collections on same nodes with ZooKeeper
 Key: SOLR-4830
 URL: https://issues.apache.org/jira/browse/SOLR-4830
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Johannes Henrysson


Created 3 collections (yp, test, hubba) with 2 shards each, on 4 nodes, so all 
3 collections shared the same nodes.

This worked fine until I killed one node as a test. On recovery, only the 
first collection came back; the node stayed marked as down for the other two.




[jira] [Created] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-4829:
--

 Summary: transaction log reference leak
 Key: SOLR-4829
 URL: https://issues.apache.org/jira/browse/SOLR-4829
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley


Failure to dereference tlogs or RecentUpdates can cause old transaction logs to 
never be closed & deleted.




[jira] [Commented] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659472#comment-13659472
 ] 

Yonik Seeley commented on SOLR-4829:


After a code review, one source of the leak is in ElectionContext.java:
{code}
  if (!success && ulog.getRecentUpdates().getVersions(1).isEmpty()) {
{code}
introduced in SOLR-3933 (Solr 4.1)
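The leak pattern in that line can be sketched with stand-in classes (UpdateLogSketch and its RecentUpdates are illustrative names, not the real Solr API): getRecentUpdates() hands back a reference-counted snapshot that pins old transaction logs, and the chained call drops the reference without ever releasing it.

```java
// Stand-in classes; not the real Solr API. getRecentUpdates() returns a
// snapshot that must be released, or old transaction logs stay pinned.
import java.util.Collections;
import java.util.List;

class UpdateLogSketch {
    int openSnapshots = 0;

    class RecentUpdates implements AutoCloseable {
        List<Long> getVersions(int n) { return Collections.emptyList(); }
        @Override public void close() { openSnapshots--; }   // release the reference
    }

    RecentUpdates getRecentUpdates() { openSnapshots++; return new RecentUpdates(); }

    public static void main(String[] args) {
        UpdateLogSketch leaky = new UpdateLogSketch();
        // Leaky form, as in the ElectionContext line: the snapshot is created
        // for one chained call and never closed.
        leaky.getRecentUpdates().getVersions(1).isEmpty();
        System.out.println("leaky open snapshots: " + leaky.openSnapshots);

        UpdateLogSketch fixed = new UpdateLogSketch();
        // Fixed form: hold the reference and release it when done.
        try (UpdateLogSketch.RecentUpdates recent = fixed.getRecentUpdates()) {
            recent.getVersions(1).isEmpty();
        }
        System.out.println("fixed open snapshots: " + fixed.openSnapshots);
    }
}
```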

> transaction log reference leak
> --
>
> Key: SOLR-4829
> URL: https://issues.apache.org/jira/browse/SOLR-4829
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>
> Failure to dereference tlogs or RecentUpdates can cause old transaction logs 
> to never be closed & deleted.




[jira] [Updated] (SOLR-4830) Node doesn't recover properly after fail, when running multiple collections on same nodes with ZooKeeper

2013-05-16 Thread Johannes Henrysson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johannes Henrysson updated SOLR-4830:
-

Attachment: pic1 - killed machine.png
pic2 - machine down.png
pic3 - end of test.png

Screenshots.
- Pic1, after killing one node.
- Pic2, node is down.
- Pic3, final result. One collection recovered as it should, while the other 
two never did.

> Node doesn't recover properly after fail, when running multiple collections 
> on same nodes with ZooKeeper
> 
>
> Key: SOLR-4830
> URL: https://issues.apache.org/jira/browse/SOLR-4830
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.3
>Reporter: Johannes Henrysson
> Attachments: pic1 - killed machine.png, pic2 - machine down.png, pic3 
> - end of test.png
>
>
> Created 3 collections (yp, test, hubba) with 2 shards each, on 4 nodes, so 
> all 3 collections shared the same nodes.
> This worked fine until I killed one node as a test. On recovery, only the 
> first collection came back; the node stayed marked as down for the other two.




[jira] [Updated] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-4829:
---

Fix Version/s: 4.3.1
   5.0
Affects Version/s: 4.1

> transaction log reference leak
> --
>
> Key: SOLR-4829
> URL: https://issues.apache.org/jira/browse/SOLR-4829
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Yonik Seeley
> Fix For: 5.0, 4.3.1
>
>
> Failure to dereference tlogs or RecentUpdates can cause old transaction logs 
> to never be closed & deleted.




[jira] [Updated] (LUCENE-5003) ECJ javadoc linting does not work with recent Java 8

2013-05-16 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5003:
--

Attachment: LUCENE-5003.patch

Slightly better patch without .

> ECJ javadoc linting does not work with recent Java 8
> 
>
> Key: LUCENE-5003
> URL: https://issues.apache.org/jira/browse/LUCENE-5003
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Minor
> Attachments: LUCENE-5003.patch, LUCENE-5003.patch
>
>
> With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
> longer works:
> - The version we currently use (3.7.2) can no longer parse the class files in 
> rt.jar / or does no longer find them
> - The latest version (4.2.2) produces a compiler error, because it cannot 
> handle some "default" interface method duplication in some Java Collections 
> interfaces (CharArraySet fails)
> I will disable the ECJ linting for now with Java > 1.7




[jira] [Assigned] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-4829:
--

Assignee: Yonik Seeley

> transaction log reference leak
> --
>
> Key: SOLR-4829
> URL: https://issues.apache.org/jira/browse/SOLR-4829
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 5.0, 4.3.1
>
>
> Failure to dereference tlogs or RecentUpdates can cause old transaction logs 
> to never be closed & deleted.




Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1623 - Still Failing

2013-05-16 Thread Adrien Grand
On Thu, May 16, 2013 at 1:58 PM, Robert Muir  wrote:
> I don't get it. MDW wraps its IndexOutput so it "knows"... sounds like
> the counting is off.

The problem is that RAMDirectory delays the counting.
MockDirectoryWrapper.getRecomputedActualSizeInBytes sums all the
lengths of the existing RAMFiles to get the actual size, but
RAMFile.length is only updated after a RAMOutputStream seek or flush.
This means that if you write 5 bytes, then 3 bytes, RAMFile.length
will still be 0 and then suddenly upon flush it will become 5+3=8.

Using the Mock IndexOutput to track bytes is an option, but I was also
thinking it could be interesting to see what happens with directories that
buffer content, so that the disk-full exception happens in flush instead of
writeBytes.
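The accounting gap described above can be shown in a self-contained sketch (the names are illustrative, not the real Lucene RAMDirectory/RAMFile classes): the file's reported length only catches up on flush, so summing lengths mid-write undercounts.

```java
// Sketch of delayed length accounting: 'length' is what a size-recomputing
// check would see; 'buffered' holds bytes written but not yet flushed.
class BufferedFileSketch {
    long length = 0;     // stays stale until flush, like RAMFile.length
    long buffered = 0;   // bytes written since the last flush

    void writeBytes(int n) { buffered += n; }      // length is NOT updated here
    void flush() { length += buffered; buffered = 0; }

    public static void main(String[] args) {
        BufferedFileSketch f = new BufferedFileSketch();
        f.writeBytes(5);
        f.writeBytes(3);
        // A mid-write size check sees 0 even though 8 bytes were written.
        System.out.println("length before flush: " + f.length);
        f.flush();
        // After flush the length jumps straight from 0 to 5+3=8.
        System.out.println("length after flush: " + f.length);
    }
}
```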

-- 
Adrien




[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b89) - Build # 5620 - Failure!

2013-05-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/5620/
Java: 64bit/jdk1.8.0-ea-b89 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 28537 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:383: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:60: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build.xml:306: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1639: 
The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1673: 
Compile failed; see the compiler error output for details.

Total time: 41 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.8.0-ea-b89 -XX:-UseCompressedOops 
-XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Resolved] (LUCENE-5003) ECJ javadoc linting does not work with recent Java 8

2013-05-16 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5003.
---

   Resolution: Fixed
Fix Version/s: 4.3
   5.0

> ECJ javadoc linting does not work with recent Java 8
> 
>
> Key: LUCENE-5003
> URL: https://issues.apache.org/jira/browse/LUCENE-5003
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Minor
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-5003.patch, LUCENE-5003.patch
>
>
> With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
> longer works:
> - The version we currently use (3.7.2) can no longer parse the class files in 
> rt.jar / or does no longer find them
> - The latest version (4.2.2) produces a compiler error, because it cannot 
> handle some "default" interface method duplication in some Java Collections 
> interfaces (CharArraySet fails)
> I will disable the ECJ linting for now with Java > 1.7




[jira] [Commented] (LUCENE-5003) ECJ javadoc linting does not work with recent Java 8

2013-05-16 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659497#comment-13659497
 ] 

Adrien Grand commented on LUCENE-5003:
--

Thanks Uwe for taking care of this!

> ECJ javadoc linting does not work with recent Java 8
> 
>
> Key: LUCENE-5003
> URL: https://issues.apache.org/jira/browse/LUCENE-5003
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Minor
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-5003.patch, LUCENE-5003.patch
>
>
> With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
> longer works:
> - The version we currently use (3.7.2) can no longer parse the class files in 
> rt.jar / or does no longer find them
> - The latest version (4.2.2) produces a compiler error, because it cannot 
> handle some "default" interface method duplication in some Java Collections 
> interfaces (CharArraySet fails)
> I will disable the ECJ linting for now with Java > 1.7




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_21) - Build # 2825 - Still Failing!

2013-05-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/2825/
Java: 32bit/jdk1.7.0_21 -server -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  
org.apache.solr.client.solrj.embedded.TestEmbeddedSolrServer.testShutdown

Error Message:


Stack Trace:
org.apache.solr.common.SolrException: 
at 
__randomizedtesting.SeedInfo.seed([2436F2CA6629E4E3:C740FB5F81537191]:0)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:262)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219)
at org.apache.solr.core.CoreContainer.(CoreContainer.java:149)
at 
org.apache.solr.client.solrj.embedded.AbstractEmbeddedSolrServerTestCase.setUp(AbstractEmbeddedSolrServerTestCase.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.IOException: The filename, directory name, or volume label 
syntax is incorrect
at java.io.WinNTFileSystem.canonicalize0(Native Method)
at java.io.Win32FileSystem.canonicalize(Win32FileSystem.java:414)
at java.io.File.getCanonicalPath(File.java:589)
at 
org.apache.solr.core.ConfigSolrXmlOld.initCoreList(ConfigSolr

[jira] [Commented] (LUCENE-5003) ECJ javadoc linting does not work with recent Java 8

2013-05-16 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659503#comment-13659503
 ] 

Uwe Schindler commented on LUCENE-5003:
---

Background information:
- This change makes ECJ fail in 4.2.2: 
http://hg.openjdk.java.net/jdk8/jdk8/jdk/rev/7857129859b


> ECJ javadoc linting does not work with recent Java 8
> 
>
> Key: LUCENE-5003
> URL: https://issues.apache.org/jira/browse/LUCENE-5003
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Minor
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-5003.patch, LUCENE-5003.patch
>
>
> With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
> longer works:
> - The version we currently use (3.7.2) can no longer parse the class files in 
> rt.jar / or does no longer find them
> - The latest version (4.2.2) produces a compiler error, because it cannot 
> handle some "default" interface method duplication in some Java Collections 
> interfaces (CharArraySet fails)
> I will disable the ECJ linting for now with Java > 1.7




[jira] [Comment Edited] (LUCENE-5003) ECJ javadoc linting does not work with recent Java 8

2013-05-16 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659503#comment-13659503
 ] 

Uwe Schindler edited comment on LUCENE-5003 at 5/16/13 1:14 PM:


Background information: Mainly this change makes ECJ fail in our currently used 
version: http://hg.openjdk.java.net/jdk8/jdk8/jdk/rev/7857129859b


  was (Author: thetaphi):
Background information:
- This change makes ECJ fail in 4.2.2: 
http://hg.openjdk.java.net/jdk8/jdk8/jdk/rev/7857129859b

  
> ECJ javadoc linting does not work with recent Java 8
> 
>
> Key: LUCENE-5003
> URL: https://issues.apache.org/jira/browse/LUCENE-5003
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Minor
> Fix For: 5.0, 4.3
>
> Attachments: LUCENE-5003.patch, LUCENE-5003.patch
>
>
> With jdk8-b89, the linting of javadocs with Eclipse's JDT compiler (ECJ) no 
> longer works:
> - The version we currently use (3.7.2) can no longer parse the class files in 
> rt.jar / or does no longer find them
> - The latest version (4.2.2) produces a compiler error, because it cannot 
> handle some "default" interface method duplication in some Java Collections 
> interfaces (CharArraySet fails)
> I will disable the ECJ linting for now with Java > 1.7




[jira] [Updated] (LUCENE-4970) NGramPhraseQuery is not boosted.

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4970:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 in r1483347.

> NGramPhraseQuery is not boosted.
> 
>
> Key: LUCENE-4970
> URL: https://issues.apache.org/jira/browse/LUCENE-4970
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.1
>Reporter: Shingo Sasaki
>Assignee: Adrien Grand
> Fix For: 4.3.1
>
> Attachments: LUCENE-4970.patch
>
>
> If I apply the setBoost() method to NGramPhraseQuery, the score does not change.
> I think the boost is forgotten when the query is optimized in the rewrite() method.
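The bug class described can be sketched generically (QuerySketch is an illustrative stand-in, not the real Lucene Query API): a rewrite() that builds an optimized query but never copies the caller's boost onto it.

```java
// Stand-in for a query whose rewrite() produces an optimized replacement.
class QuerySketch {
    private float boost = 1.0f;
    void setBoost(float b) { boost = b; }
    float getBoost() { return boost; }

    QuerySketch rewriteBuggy() {
        return new QuerySketch();            // boost silently resets to 1.0
    }

    QuerySketch rewriteFixed() {
        QuerySketch optimized = new QuerySketch();
        optimized.setBoost(getBoost());      // propagate boost to the rewritten query
        return optimized;
    }

    public static void main(String[] args) {
        QuerySketch q = new QuerySketch();
        q.setBoost(2.5f);
        System.out.println("buggy rewrite boost: " + q.rewriteBuggy().getBoost());
        System.out.println("fixed rewrite boost: " + q.rewriteFixed().getBoost());
    }
}
```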




[jira] [Commented] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659509#comment-13659509
 ] 

Mark Miller commented on SOLR-4829:
---

It's actually SOLR-3939

> transaction log reference leak
> --
>
> Key: SOLR-4829
> URL: https://issues.apache.org/jira/browse/SOLR-4829
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 5.0, 4.3.1
>
>
> Failure to dereference tlogs or RecentUpdates can cause old transaction logs 
> to never be closed & deleted.




[jira] [Commented] (LUCENE-5002) Deadlock in DocumentsWriterFlushControl

2013-05-16 Thread Sergiusz Urbaniak (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659508#comment-13659508
 ] 

Sergiusz Urbaniak commented on LUCENE-5002:
---

Hi,

Thanks for the quick feedback! As long as the sync issues on IW are unresolved, 
we declare IW instances as *not thread-safe* for our development and 
synchronize access to them externally (of course, as mentioned in the docs, not 
on the IW instance itself).

> Deadlock in DocumentsWriterFlushControl
> ---
>
> Key: LUCENE-5002
> URL: https://issues.apache.org/jira/browse/LUCENE-5002
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.3
> Environment: OpenJDK 64-Bit Server VM (23.7-b01 mixed mode)
> Linux Ubuntu Server 12.04 LTS 64-Bit
>Reporter: Sergiusz Urbaniak
>Assignee: Simon Willnauer
> Attachments: LUCENE-5002_test.patch
>
>
> Hi all,
> We have an obvious deadlock between a "MaybeRefreshIndexJob" thread
> calling ReferenceManager.maybeRefresh(ReferenceManager.java:204) and a
> "RebuildIndexJob" thread calling
> IndexWriter.deleteAll(IndexWriter.java:2065).
> Lucene wants to flush in the "MaybeRefreshIndexJob" thread trying to 
> intrinsically lock the IndexWriter instance at 
> {{DocumentsWriterPerThread.java:563}} before notifyAll()ing the flush. 
> Simultaneously, the "RebuildIndexJob" thread, which has already intrinsically 
> locked the IndexWriter instance in IndexWriter#deleteAll, wait()s at 
> {{DocumentsWriterFlushControl.java:245}} for the flush forever, causing a 
> deadlock.
> {code}
> "MaybeRefreshIndexJob Thread - 2" daemon prio=10 tid=0x7f8fe4006000 
> nid=0x1ac2 waiting for monitor entry [0x7f8fa7bf7000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.lucene.index.IndexWriter.useCompoundFile(IndexWriter.java:2223)
>   - waiting to lock <0xf1c00438> (a 
> org.apache.lucene.index.IndexWriter)
>   at 
> org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:563)
>   at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:533)
>   at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
>   at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:559)
>   at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:365)
>   - locked <0xf1c007d0> (a java.lang.Object)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:270)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:245)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
>   at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:170)
>   at 
> org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:118)
>   at 
> org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58)
>   at 
> org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:155)
>   at 
> org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:204)
>   at jobs.MaybeRefreshIndexJob.timeout(MaybeRefreshIndexJob.java:47)
> "RebuildIndexJob Thread - 1" prio=10 tid=0x7f903000a000 nid=0x1a38 in 
> Object.wait() [0x7f9037dd6000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xf1c0c240> (a 
> org.apache.lucene.index.DocumentsWriterFlushControl)
>   at java.lang.Object.wait(Object.java:503)
>   at 
> org.apache.lucene.index.DocumentsWriterFlushControl.waitForFlush(DocumentsWriterFlushControl.java:245)
>   - locked <0xf1c0c240> (a 
> org.apache.lucene.index.DocumentsWriterFlushControl)
>   at 
> org.apache.lucene.index.DocumentsWriter.abort(DocumentsWriter.java:235)
>   - locked <0xf1c05370> (a 
> org.apache.lucene.index.DocumentsWriter)
>   at org.apache.lucene.index.IndexWriter.deleteAll(IndexWriter.java:2065)
>   - locked <0xf1c00438> (a org.apache.lucene.index.IndexWriter)
>   at jobs.RebuildIndexJob.buildIndex(RebuildIndexJob.java:102)
> {code}
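The lock-order inversion in those two stacks can be reproduced in miniature with plain ReentrantLocks (a sketch, not Lucene code): each thread takes its first lock, then tries the other thread's lock, and neither can proceed. tryLock with a timeout is used here so the demo terminates instead of hanging.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class LockOrderSketch {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock writerLock = new ReentrantLock(); // stands in for the IndexWriter monitor
        ReentrantLock flushLock  = new ReentrantLock(); // stands in for DocumentsWriterFlushControl
        CountDownLatch firstLocksHeld = new CountDownLatch(2);
        CountDownLatch bothTried = new CountDownLatch(2);
        boolean[] acquired = new boolean[2];

        // "MaybeRefreshIndexJob": takes the flush-side lock, then needs the writer lock.
        Thread refresh = worker(flushLock, writerLock, firstLocksHeld, bothTried, acquired, 0);
        // "RebuildIndexJob": takes the writer lock, then waits on the flush-side lock.
        Thread rebuild = worker(writerLock, flushLock, firstLocksHeld, bothTried, acquired, 1);

        refresh.start(); rebuild.start();
        refresh.join(); rebuild.join();
        System.out.println("refresh got writer lock: " + acquired[0]);
        System.out.println("rebuild got flush lock: " + acquired[1]);
    }

    static Thread worker(ReentrantLock first, ReentrantLock second,
                         CountDownLatch firstLocksHeld, CountDownLatch bothTried,
                         boolean[] acquired, int idx) {
        return new Thread(() -> {
            first.lock();
            try {
                firstLocksHeld.countDown();
                firstLocksHeld.await();        // both threads now hold their first lock
                acquired[idx] = second.tryLock(100, TimeUnit.MILLISECONDS);
                if (acquired[idx]) second.unlock();
                bothTried.countDown();
                bothTried.await();             // keep the first lock until both have tried
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                first.unlock();
            }
        });
    }
}
```

With intrinsic monitors, as in the real stacks, there is no timeout: both threads block forever.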




[jira] [Updated] (LUCENE-4974) CommitIndexTask is broken if no params are set

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4974:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 in r1483349.

> CommitIndexTask is broken if no params are set
> --
>
> Key: LUCENE-4974
> URL: https://issues.apache.org/jira/browse/LUCENE-4974
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/benchmark
>Reporter: Shai Erera
>Assignee: Shai Erera
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4974.patch
>
>
> If you put a CommitIndex in a benchmark algorithm with no params, you get NPE 
> from IW.setCommitData, because you are not allowed to pass null. It's a 
> trivial fix - CommitIndexTask should call setCommitData only if commitData is 
> not null.
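A minimal sketch of the described fix (stand-in names, not the real benchmark or IndexWriter classes): guard the setCommitData call so a params-less CommitIndex task simply commits.

```java
import java.util.Map;

class CommitSketch {
    // Mimics IndexWriter's rejection of a null commit-data map.
    static class WriterStandIn {
        Map<String, String> commitData;
        void setCommitData(Map<String, String> data) {
            if (data == null) throw new NullPointerException("commitData must not be null");
            commitData = data;
        }
        void commit() { /* no-op for the sketch */ }
    }

    static void doCommit(WriterStandIn w, Map<String, String> commitData) {
        if (commitData != null) {      // the one-line fix: skip setCommitData when no params
            w.setCommitData(commitData);
        }
        w.commit();
    }

    public static void main(String[] args) {
        WriterStandIn w = new WriterStandIn();
        doCommit(w, null);             // previously NPE'd; now just a plain commit
        doCommit(w, Map.of("userData", "run-1"));
        System.out.println("commitData keys: " + w.commitData.keySet());
    }
}
```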




[jira] [Commented] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659512#comment-13659512
 ] 

Mark Miller commented on SOLR-4829:
---

This one actually occurred to me when I was reading the user thread on this the 
other day. It didn't seem like the culprit for that user, though, because it 
only happens on election (unless he was losing the leader consistently for some 
ugly reason).

> transaction log reference leak
> --
>
> Key: SOLR-4829
> URL: https://issues.apache.org/jira/browse/SOLR-4829
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 5.0, 4.3.1
>
>
> Failure to dereference tlogs or RecentUpdates can cause old transaction logs 
> to never be closed & deleted.




[jira] [Updated] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-4829:
---

Attachment: SOLR-4829.patch

Here's a patch that should hopefully fix things up wrt getRecentUpdates.
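As a self-contained model of the discipline the patch restores (hypothetical names, not Solr's actual classes): every reference taken on a tlog or on the RecentUpdates view must be released in a finally block, otherwise the last reference is never dropped and the underlying log file is never closed and deleted.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class TlogRefSketch {
    static final class RefCounted {
        private final AtomicInteger refs = new AtomicInteger(1); // owner holds one ref
        private volatile boolean closed = false;

        void incref() { refs.incrementAndGet(); }

        void decref() {
            if (refs.decrementAndGet() == 0) {
                closed = true; // last reference gone: safe to close and delete the file
            }
        }

        boolean isClosed() { return closed; }
    }

    // Usage pattern: take a reference, and release it even if reading throws.
    static void readRecentUpdates(RefCounted recentUpdates) {
        recentUpdates.incref();
        try {
            // ... inspect recent updates ...
        } finally {
            recentUpdates.decref();
        }
    }
}
```

Forgetting the `decref()` anywhere on this path is exactly the kind of leak the issue describes.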

> transaction log reference leak
> --
>
> Key: SOLR-4829
> URL: https://issues.apache.org/jira/browse/SOLR-4829
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 5.0, 4.3.1
>
> Attachments: SOLR-4829.patch
>
>
> Failure to dereference tlogs or RecentUpdates can cause old transaction logs 
> to never be closed & deleted.




[jira] [Commented] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659520#comment-13659520
 ] 

Mark Miller commented on SOLR-4829:
---

This looks like it reintroduces the NPE you can get with no ulog in 
ElectionContext. When I put in the null check yesterday or the day before, I 
was torn between just letting the node become leader if it has no ulog and is 
active, and throwing a specific exception about having no ulog. I ended up 
choosing the former, thinking that if we don't want to support running without a 
ulog in SolrCloud mode, that should be checked on startup.
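The choice described here can be sketched with illustrative names only (this is not the real ElectionContext code): when a core has no update log, either allow leadership for an active node, or fail fast with a clear message; the comment opts for the former.

```java
public class UlogLeaderSketch {
    enum State { ACTIVE, RECOVERING, DOWN }

    static boolean mayBecomeLeader(Object ulog, State state) {
        if (ulog == null) {
            // No ulog: allow leadership only for an active node. Whether running
            // without a ulog is supported at all is better checked once at startup.
            return state == State.ACTIVE;
        }
        // ... normal election checks using the ulog would go here ...
        return true;
    }
}
```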

> transaction log reference leak
> --
>
> Key: SOLR-4829
> URL: https://issues.apache.org/jira/browse/SOLR-4829
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 5.0, 4.3.1
>
> Attachments: SOLR-4829.patch
>
>
> Failure to dereference tlogs or RecentUpdates can cause old transaction logs 
> to never be closed & deleted.




[jira] [Updated] (LUCENE-4986) NRT reader doesn't see changes after successful IW.tryDeleteDocument

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4986:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 in r1483358.

I had to change the TestTryDelete test to use NRTManager.TrackingIndexWriter, 
because TrackingIndexWriter only became an independent class with LUCENE-4967.

> NRT reader doesn't see changes after successful IW.tryDeleteDocument
> 
>
> Key: LUCENE-4986
> URL: https://issues.apache.org/jira/browse/LUCENE-4986
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4986.patch, LUCENE-4986.patch
>
>
> Reported by Reg on the java-user list, subject 
> "TrackingIndexWriter.tryDeleteDocument(IndexReader, int) vs 
> deleteDocuments(Query)":
> When IW.tryDeleteDocument succeeds, it marks the document as deleted in the 
> pending BitVector in ReadersAndLiveDocs, but then when the NRT reader checks 
> if it's still current by calling IW.nrtIsCurrent, we fail to catch changes to 
> the BitVector, resulting in the NRT reader thinking it's current and not 
> reopening.
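The failure mode above can be modeled in miniature (hypothetical names, not the real IndexWriter internals): an NRT reader is "current" only if nothing changed since it was opened, and the bug was that a successful tryDeleteDocument mutated the pending live docs without registering a change, so the currency check kept answering true.

```java
public class NrtCurrentSketch {
    private long changeCount = 0;

    // The reader remembers the change count at open time.
    long openReader() { return changeCount; }

    // The fix, in spirit: a successful delete must count as a change.
    void tryDeleteDocument() { changeCount++; }

    boolean isCurrent(long readerChangeCount) {
        return readerChangeCount == changeCount;
    }
}
```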




[jira] [Updated] (LUCENE-4991) QueryParser doesnt handle synonyms correctly for chinese

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4991:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 in r1483364.

> QueryParser doesnt handle synonyms correctly for chinese
> 
>
> Key: LUCENE-4991
> URL: https://issues.apache.org/jira/browse/LUCENE-4991
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Reporter: Robert Muir
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4991.patch
>
>
> As reported multiple times on the user list:
> http://find.searchhub.org/document/eaf0e88a6a0d4d1f
> http://find.searchhub.org/document/abf28043c52b6efc
> http://find.searchhub.org/document/1313794632c90826
> The logic here is not forming the right query structures and is ignoring the 
> PositionIncrementAttribute from the TokenStream.
> * when default operator is AND, you can see it more clearly, as synonyms are 
> wrongly inserted as additional MUST terms:
> expected:<+field:中 +(field:国 field:國)> 
> but was:<+field:中 +field:国 +field:國>
> * even when default operator is OR, it's still wrong, because we ignore posInc, 
> which means the coord computation is not correct (so scoring is wrong)
> This also screws up scoring and queries for decompounding (because they 
> go through this exact situation if they add the original compound as a synonym).
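The expected grouping can be illustrated with a toy formatter (not the real QueryParser): tokens whose position increment is 0 are synonyms of the previous token and belong in one OR group, and each position then becomes one MUST clause under the AND operator, rather than being flattened into separate MUST terms.

```java
import java.util.ArrayList;
import java.util.List;

public class SynGroupSketch {
    /**
     * tokens: {term, positionIncrement} pairs from the analyzer.
     * positionIncrement == 0 means "synonym of the previous token".
     */
    static String andQuery(String field, String[][] tokens) {
        List<List<String>> positions = new ArrayList<>();
        for (String[] t : tokens) {
            if (positions.isEmpty() || Integer.parseInt(t[1]) > 0) {
                positions.add(new ArrayList<>()); // new position
            }
            positions.get(positions.size() - 1).add(field + ":" + t[0]);
        }
        StringBuilder sb = new StringBuilder();
        for (List<String> pos : positions) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(pos.size() == 1 ? "+" + pos.get(0)
                                      : "+(" + String.join(" ", pos) + ")");
        }
        return sb.toString();
    }
}
```

For the tokens 中 (posInc 1), 国 (posInc 1), 國 (posInc 0), this produces the expected structure from the report rather than three separate MUST terms.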




[jira] [Commented] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659533#comment-13659533
 ] 

Yonik Seeley commented on SOLR-4829:


bq. This looks like it reintroduces the NPE you can get with no ulog in 
ElectionContext

Ah, thanks - I got a merge conflict and then missed your update.  I'll fix.

> transaction log reference leak
> --
>
> Key: SOLR-4829
> URL: https://issues.apache.org/jira/browse/SOLR-4829
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 5.0, 4.3.1
>
> Attachments: SOLR-4829.patch
>
>
> Failure to dereference tlogs or RecentUpdates can cause old transaction logs 
> to never be closed & deleted.




[jira] [Updated] (SOLR-4829) transaction log reference leak

2013-05-16 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-4829:
---

Attachment: SOLR-4829.patch

> transaction log reference leak
> --
>
> Key: SOLR-4829
> URL: https://issues.apache.org/jira/browse/SOLR-4829
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 5.0, 4.3.1
>
> Attachments: SOLR-4829.patch, SOLR-4829.patch
>
>
> Failure to dereference tlogs or RecentUpdates can cause old transaction logs 
> to never be closed & deleted.




[jira] [Updated] (LUCENE-4994) PatternKeywordMarkerFilter is final and has protected ctor and cannot be instantiated by non-Lucene code

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4994:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 in r1483372.

> PatternKeywordMarkerFilter is final and has protected ctor and cannot be 
> instantiated by non-Lucene code
> 
>
> Key: LUCENE-4994
> URL: https://issues.apache.org/jira/browse/LUCENE-4994
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.3.1
>
>
> I tried to write a test for LUCENE-4993 but recognized that a copy'n'paste 
> error made the ctor of this filter protected.
> The sister SetKeywordMarkerFilter has a public ctor.




[jira] [Updated] (LUCENE-4993) BeiderMorseFilter inserts tokens with positionIncrement=0, but ignores all custom attributes except OffsetAttribute

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4993:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 in r1483376.

> BeiderMorseFilter inserts tokens with positionIncrement=0, but ignores all 
> custom attributes except OffsetAttribute
> ---
>
> Key: LUCENE-4993
> URL: https://issues.apache.org/jira/browse/LUCENE-4993
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.3
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4993.patch
>
>
> BeiderMorseFilter inserts sometimes additional phonetic tokens for the same 
> source token. Currently it calls clearAttributes before doing this and sets 
> the new token's term, positionIncrement=0 and the original offset.
> This leads to problems if the TokenStream contains other attributes inserted 
> before (like KeywordAttribute, FlagsAttribute,...). Those are all reverted to 
> defaults for the inserted tokens.
> The TokenFilter should remove the special case done for preserving offsets 
> and instead use captureState() and restoreState().
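A toy model of why captureState()/restoreState() is the right tool here (this is not the real attribute API): clearAttributes() wipes every attribute on the stream, so inserted tokens lose KeywordAttribute, FlagsAttribute, and so on, whereas capturing the source token's full state and restoring it preserves them.

```java
import java.util.HashMap;
import java.util.Map;

public class StateRestoreSketch {
    // Attributes modeled as a name -> value map.
    final Map<String, Object> attributes = new HashMap<>();

    Map<String, Object> captureState() { return new HashMap<>(attributes); }

    void restoreState(Map<String, Object> state) {
        attributes.clear();
        attributes.putAll(state);
    }

    // Emit a phonetic variant of a captured token without losing its attributes.
    void insertVariant(Map<String, Object> sourceState, String phoneticTerm) {
        restoreState(sourceState);             // keyword/flags/offsets stay intact
        attributes.put("term", phoneticTerm);  // only term and posInc change
        attributes.put("posInc", 0);
    }
}
```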




[jira] [Updated] (LUCENE-4944) changes2html.pl does not detect duplicate sections in the changes.txt

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4944:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 in r1483377 and r1483379.

> changes2html.pl does not detect duplicate sections in the changes.txt
> -
>
> Key: LUCENE-4944
> URL: https://issues.apache.org/jira/browse/LUCENE-4944
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.2
>Reporter: Uwe Schindler
> Fix For: 4.3.1
>
>
> When reviewing the release artifacts of Lucene 4.3, I noticed that 
> CHANGES.txt contains a section "api changes" 2 times. The changes2html 
> converter should maybe complain about that and fail the build. Otherwise the 
> generated HTML contains the same anchor element two times for one release and 
> the open/close logic breaks (it only open/closes the first one, although you 
> click on the second one).
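The check the converter should perform (the real tool is the Perl script changes2html.pl; this Java sketch with hypothetical names just illustrates the logic) is a simple seen-set pass over the section headings of each release, failing on the first repeat.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Optional;
import java.util.Set;

public class DupSectionCheck {
    // Returns the first heading that appears more than once, so the build can fail.
    static Optional<String> firstDuplicate(List<String> sectionHeadings) {
        Set<String> seen = new HashSet<>();
        for (String h : sectionHeadings) {
            if (!seen.add(h.toLowerCase(Locale.ROOT))) {
                return Optional.of(h);
            }
        }
        return Optional.empty();
    }
}
```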




[jira] [Updated] (LUCENE-4938) IndexSearcher.search() with sort doesnt do min(maxdoc, n)

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4938:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Backported to 4.3.1 in r1483384.

> IndexSearcher.search() with sort doesnt do min(maxdoc, n)
> -
>
> Key: LUCENE-4938
> URL: https://issues.apache.org/jira/browse/LUCENE-4938
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4938.patch, LUCENE-4938.patch, LUCENE-4938.patch
>
>
> It does this without a sort though.
> This caused TestFunctionQuerySort.testSearchAfterWhenSortingByFunctionValues 
> to OOM (why only sometimes?)
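The missing guard, shown in isolation with a hypothetical helper name: when collecting the top-n sorted hits, clamp n to the index size first, so the priority queue is not allocated at the raw requested n (which is what made the test run out of memory for a huge n).

```java
public class ClampTopNSketch {
    static int clampTopN(int n, int maxDoc) {
        if (n < 1) throw new IllegalArgumentException("n must be >= 1");
        // Never allocate more result slots than there are documents.
        return Math.min(n, Math.max(1, maxDoc));
    }
}
```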




[jira] [Commented] (SOLR-3225) highlighting of queries does not works in solr4.0

2013-05-16 Thread Matthias Herrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659570#comment-13659570
 ] 

Matthias Herrmann commented on SOLR-3225:
-

What exactly do you mean by saying ??You can still use highlighting on any of 
the fields in the "to" document?? ? I made a simple example by indexing 
example/exampledocs/*.xml, which ships with the standard Solr distribution. With 
these documents indexed on the example server, I ran the following query:
[http://localhost:8983/solr/collection1/select?q=belkin&defType=lucene&wt=json&indent=true&hl=true&hl.fl=*
 ]
In the query result the section "highlighting" looks like:
{code}
"highlighting":{
"F8V7067-APL-KIT":{
  "name":["Belkin Mobile Power Cord for iPod w/ Dock"],
  "manu_id_s":["belkin"],
  "manu":["Belkin"]},
"IW-02":{
  "manu_id_s":["belkin"],
  "manu":["Belkin"]}}
{code}
So highlighting works fine. BUT when running this query: 
[http://localhost:8983/solr/collection1/select?q=\{!join+from=id+to=id\}belkin&defType=lucene&wt=json&indent=true&hl=true&hl.fl=*]
In the query result the section "highlighting" looks like:
{code}
"highlighting":{
"F8V7067-APL-KIT":{},
"IW-02":{}}
{code}
As you can see, highlighting does not work in combination with join. Is this a 
bug, or am I missing something?


> highlighting of queries does not works in solr4.0
> -
>
> Key: SOLR-3225
> URL: https://issues.apache.org/jira/browse/SOLR-3225
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 4.0-ALPHA
>Reporter: sumit pathak
>  Labels: documentation
>
> q= {!join from=manu_id_s to=id}ipod
> Highlighting this query does not highlight the required field; hence 
> highlighting does not work for join queries.




[jira] [Created] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Steven Bower (JIRA)
Steven Bower created SOLR-4831:
--

 Summary: Transaction logs are leaking
 Key: SOLR-4831
 URL: https://issues.apache.org/jira/browse/SOLR-4831
 Project: Solr
  Issue Type: Bug
Reporter: Steven Bower


We have a system in which a client is sending 1 record at a time (via REST) 
followed by a commit. This has produced ~65k tlog files and the JVM has run out 
of file descriptors... I grabbed a heap dump from the JVM and I can see ~52k 
"unreachable" FileDescriptors... This leads me to believe that the 
TransactionLog is not properly closing all of it's files before getting rid of 
the object... 

I've verified with lsof that indeed there are ~60k tlog files that are open 
currently..

This is Solr 4.3.0




[jira] [Commented] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659581#comment-13659581
 ] 

Steven Bower commented on SOLR-4831:


Looking at the timestamps on the tlog files, they seem to have all been created 
around the same time (04:55). Starting around this time I see the 
exception below (there were 1628); in fact it's getting tons of these (200k+), 
but most of the time inside regular commits...

{noformat}
2013-15-05 04:55:06.634 ERROR UpdateLog [recoveryExecutor-6-thread-7922] - 
java.lang.ArrayIndexOutOfBoundsException: 2603
at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:146)
at 
org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.nextDoc(Lucene41PostingsReader.java:492)
at 
org.apache.lucene.index.BufferedDeletesStream.applyTermDeletes(BufferedDeletesStream.java:407)
at 
org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:273)
at 
org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2973)
at 
org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2964)
at 
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2704)
at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2839)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2819)
at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:536)
at 
org.apache.solr.update.UpdateLog$LogReplayer.doReplay(UpdateLog.java:1339)
at org.apache.solr.update.UpdateLog$LogReplayer.run(UpdateLog.java:1163)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
{noformat}

> Transaction logs are leaking
> 
>
> Key: SOLR-4831
> URL: https://issues.apache.org/jira/browse/SOLR-4831
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>
> We have a system in which a client is sending 1 record at a time (via REST) 
> followed by a commit. This has produced ~65k tlog files and the JVM has run 
> out of file descriptors... I grabbed a heap dump from the JVM and I can see 
> ~52k "unreachable" FileDescriptors... This leads me to believe that the 
> TransactionLog is not properly closing all of it's files before getting rid 
> of the object... 
> I've verified with lsof that indeed there are ~60k tlog files that are open 
> currently..
> This is Solr 4.3.0




[jira] [Commented] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659583#comment-13659583
 ] 

Steven Bower commented on SOLR-4831:


I bounced the server, removed all the tlog files, and started back up, and it 
immediately goes back into the same state: within a few minutes there are 
12k tlog files again. This is under the same type of load (doc / commit).

> Transaction logs are leaking
> 
>
> Key: SOLR-4831
> URL: https://issues.apache.org/jira/browse/SOLR-4831
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>
> We have a system in which a client is sending 1 record at a time (via REST) 
> followed by a commit. This has produced ~65k tlog files and the JVM has run 
> out of file descriptors... I grabbed a heap dump from the JVM and I can see 
> ~52k "unreachable" FileDescriptors... This leads me to believe that the 
> TransactionLog is not properly closing all of it's files before getting rid 
> of the object... 
> I've verified with lsof that indeed there are ~60k tlog files that are open 
> currently..
> This is Solr 4.3.0




[jira] [Commented] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659591#comment-13659591
 ] 

Steven Bower commented on SOLR-4831:


It seems the index is corrupt in some way. When I stopped all traffic and then 
issued an optimize, I got the exception below:

{noformat}
2013-16-05 10:36:53.816 INFO  UpdateHandler [qtp1333933549-202] - start 
commit{,optimize=true,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
2013-16-05 10:36:53.822 ERROR SolrCore [qtp1333933549-202] - 
java.io.IOException: background merge hit exception: _2nces(4.3):C26466/6 
_2nce7(4.3):C26466/53 _2ncew(4.3):C6/1 _2nceu(4.3):C4 _2ncf1(4.3):C4 
_2ncf8(4.3):C2 _2ncey(4.3):C2 _2ncf4(4.3):C4 _2ncf9(4.3):C3 _2ncet(4.3):C2 
_2ncf5(4.3):C1 _2ncez(4.3):C1 _2ncex(4.3):C1 _2ncf6(4.3):C1 _2ncf7(4.3):C1 
_2ncf0(4.3):C1/1 into _2ncfj [maxNumSegments=1]
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1686)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1622)
at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:519)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1109)
at 
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1817)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:488)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:932)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:994)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.ArrayIndexOutOfBoundsException
{noformat}

> Transaction logs are leaking
> 
>
> Key: SOLR-4831
> URL: https://issues.apache.org/jira/browse/SOLR-4831
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>
> We have a system in which a client is sending 1 record at a time (via REST) 
> follo

[jira] [Commented] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659592#comment-13659592
 ] 

Shalin Shekhar Mangar commented on SOLR-4831:
-

Probably the same issue as SOLR-4829?

> Transaction logs are leaking
> 
>
> Key: SOLR-4831
> URL: https://issues.apache.org/jira/browse/SOLR-4831
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>
> We have a system in which a client is sending 1 record at a time (via REST) 
> followed by a commit. This has produced ~65k tlog files and the JVM has run 
> out of file descriptors... I grabbed a heap dump from the JVM and I can see 
> ~52k "unreachable" FileDescriptors... This leads me to believe that the 
> TransactionLog is not properly closing all of it's files before getting rid 
> of the object... 
> I've verified with lsof that indeed there are ~60k tlog files that are open 
> currently..
> This is Solr 4.3.0




[jira] [Commented] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659599#comment-13659599
 ] 

Yonik Seeley commented on SOLR-4831:


Looks like the root cause is index corruption.

Could you run CheckIndex to verify?
{code}
java -cp ./solr-webapp/webapp/WEB-INF/lib/lucene-core*jar 
-ea:org.apache.lucene... org.apache.lucene.index.CheckIndex 
solr/collection1/index
{code}

> Transaction logs are leaking
> 
>
> Key: SOLR-4831
> URL: https://issues.apache.org/jira/browse/SOLR-4831
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>
> We have a system in which a client is sending 1 record at a time (via REST) 
> followed by a commit. This has produced ~65k tlog files and the JVM has run 
> out of file descriptors... I grabbed a heap dump from the JVM and I can see 
> ~52k "unreachable" FileDescriptors... This leads me to believe that the 
> TransactionLog is not properly closing all of it's files before getting rid 
> of the object... 
> I've verified with lsof that indeed there are ~60k tlog files that are open 
> currently..
> This is Solr 4.3.0




[jira] [Comment Edited] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659599#comment-13659599
 ] 

Yonik Seeley edited comment on SOLR-4831 at 5/16/13 2:49 PM:
-

Looks like the root cause is index corruption.

Could you run check index to verify?
{code}
java -cp ./solr-webapp/webapp/WEB-INF/lib/lucene-core*jar 
-ea:org.apache.lucene... org.apache.lucene.index.CheckIndex 
solr/collection1/data/index
{code}

  was (Author: ysee...@gmail.com):
Looks like the root cause is index corruption.

Could you run check index to verify?
{code}
java -cp ./solr-webapp/webapp/WEB-INF/lib/lucene-core*jar 
-ea:org.apache.lucene... org.apache.lucene.index.CheckIndex 
solr/collection1/index
{code}
  
> Transaction logs are leaking
> 
>
> Key: SOLR-4831
> URL: https://issues.apache.org/jira/browse/SOLR-4831
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>
> We have a system in which a client is sending 1 record at a time (via REST) 
> followed by a commit. This has produced ~65k tlog files and the JVM has run 
> out of file descriptors... I grabbed a heap dump from the JVM and I can see 
> ~52k "unreachable" FileDescriptors... This leads me to believe that the 
> TransactionLog is not properly closing all of it's files before getting rid 
> of the object... 
> I've verified with lsof that indeed there are ~60k tlog files that are open 
> currently..
> This is Solr 4.3.0




[jira] [Updated] (SOLR-4823) Split LBHttpSolrServer into two classes one for the solrj use case and one for the solr cloud use case

2013-05-16 Thread philip hoy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

philip hoy updated SOLR-4823:
-

Attachment: SOLR-4823.patch

Here is a first stab at a refactoring, it is without any additional test 
coverage at present and may well be a bit too much to swallow. However I am 
happy to revisit it. Interestingly moving the cloud load balancing code out of 
LBHttpSolrServer did not affect any tests so perhaps that use case could use 
some extra test coverage.

> Split LBHttpSolrServer into two classes one for the solrj use case and one 
> for the solr cloud use case
> --
>
> Key: SOLR-4823
> URL: https://issues.apache.org/jira/browse/SOLR-4823
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: philip hoy
>Priority: Minor
> Attachments: SOLR-4823.patch
>
>
> The LBHttpSolrServer has too many responsibilities. It could perhaps be 
> broken into two classes, one in solrj to be used in the place of an external 
> load balancer that balances across a known set of solr servers defined at 
> construction time and one in solr core to be used by the solr cloud 
> components that balances across servers dependent on the request.
> To save code duplication, if much arises, an abstract base class could be 
> introduced into solrj.
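A rough sketch of the split being proposed, with illustrative names only (these are not the classes in the attached patch): a shared abstract base class, one subclass balancing over a server list fixed at construction time, and one choosing candidates per request, as SolrCloud needs.

```java
import java.util.List;
import java.util.Map;

// Illustrative names only; not the classes in the attached patch.
abstract class AbstractLBSolrServer {
    private int counter = 0;   // simple round-robin state

    // Round-robin over whatever candidate list the subclass supplies.
    String pickServer(List<String> candidates) {
        return candidates.get(Math.floorMod(counter++, candidates.size()));
    }

    abstract List<String> serversFor(String request);
}

// solrj use case: the server set is fixed at construction time,
// standing in for an external load balancer.
class FixedListLBServer extends AbstractLBSolrServer {
    private final List<String> servers;
    FixedListLBServer(List<String> servers) { this.servers = servers; }
    @Override List<String> serversFor(String request) { return servers; }
}

// Cloud use case: the candidate set depends on the request, here
// simulated with a shard-to-replicas lookup table.
class RequestAwareLBServer extends AbstractLBSolrServer {
    private final Map<String, List<String>> shardReplicas;
    RequestAwareLBServer(Map<String, List<String>> shardReplicas) {
        this.shardReplicas = shardReplicas;
    }
    @Override List<String> serversFor(String shard) {
        return shardReplicas.get(shard);
    }
}

class LBSplitSketch {
    public static void main(String[] args) {
        FixedListLBServer fixed = new FixedListLBServer(
            List.of("http://a:8983/solr", "http://b:8983/solr"));
        System.out.println(fixed.pickServer(fixed.serversFor("any")));
        System.out.println(fixed.pickServer(fixed.serversFor("any")));

        RequestAwareLBServer cloud = new RequestAwareLBServer(
            Map.of("shard1", List.of("http://c:8983/solr")));
        System.out.println(cloud.pickServer(cloud.serversFor("shard1")));
    }
}
```

The base class owns only the balancing policy; everything request-dependent lives in the subclass, which is the separation of concerns the issue asks for.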




[jira] [Comment Edited] (SOLR-4823) Split LBHttpSolrServer into two classes one for the solrj use case and one for the solr cloud use case

2013-05-16 Thread philip hoy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659604#comment-13659604
 ] 

philip hoy edited comment on SOLR-4823 at 5/16/13 2:57 PM:
---

Here is a first stab at a refactoring, it is without any additional test 
coverage at present and may well be a bit too much to swallow. However I am 
happy to revisit it. 

Interestingly moving the cloud load balancing code out of LBHttpSolrServer did 
not affect any tests so perhaps that use case could do with some extra test 
coverage.


  was (Author: phloy):
Here is a first stab at a refactorig, it is without any additional test 
coverage at present and may well be a bit to much to swallow. However I am 
happy to revisit it. Interestingly moving the cloud load balancing code out of 
LBHttpSolrServer did not affect any tests so perhaps that use case could use 
some extra test coverage.
  
> Split LBHttpSolrServer into two classes one for the solrj use case and one 
> for the solr cloud use case
> --
>
> Key: SOLR-4823
> URL: https://issues.apache.org/jira/browse/SOLR-4823
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: philip hoy
>Priority: Minor
> Attachments: SOLR-4823.patch
>
>
> The LBHttpSolrServer has too many responsibilities. It could perhaps be 
> broken into two classes, one in solrj to be used in the place of an external 
> load balancer that balances across a known set of solr servers defined at 
> construction time and one in solr core to be used by the solr cloud 
> components that balances across servers dependent on the request.
> To save code duplication, if much arises, an abstract base class could be 
> introduced into solrj.




[jira] [Commented] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659607#comment-13659607
 ] 

Yonik Seeley commented on SOLR-4831:


Was this index ever replicated (i.e. is this node part of a solr cloud cluster, 
or is it a slave)?


> Transaction logs are leaking
> 
>
> Key: SOLR-4831
> URL: https://issues.apache.org/jira/browse/SOLR-4831
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>
> We have a system in which a client is sending 1 record at a time (via REST) 
> followed by a commit. This has produced ~65k tlog files and the JVM has run 
> out of file descriptors... I grabbed a heap dump from the JVM and I can see 
> ~52k "unreachable" FileDescriptors... This leads me to believe that the 
> TransactionLog is not properly closing all of its files before getting rid 
> of the object... 
> I've verified with lsof that indeed there are ~60k tlog files that are open 
> currently..
> This is Solr 4.3.0




[jira] [Commented] (LUCENE-4981) Deprecate PositionFilter

2013-05-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659608#comment-13659608
 ] 

Steve Rowe commented on LUCENE-4981:


+1

> Deprecate PositionFilter
> 
>
> Key: LUCENE-4981
> URL: https://issues.apache.org/jira/browse/LUCENE-4981
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-4981.patch, LUCENE-4981.patch
>
>
> According to the documentation 
> (http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory),
>  PositionFilter is mainly useful to make query parsers generate boolean 
> queries instead of phrase queries although this problem can be solved at 
> query parsing level instead of analysis level (eg. using 
> QueryParser.setAutoGeneratePhraseQueries).
> So given that PositionFilter corrupts token graphs (see TestRandomChains), I 
> propose to deprecate it.




[jira] [Comment Edited] (LUCENE-4981) Deprecate PositionFilter

2013-05-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659608#comment-13659608
 ] 

Steve Rowe edited comment on LUCENE-4981 at 5/16/13 3:00 PM:
-

+1, thanks Adrien!

  was (Author: steve_rowe):
+1
  
> Deprecate PositionFilter
> 
>
> Key: LUCENE-4981
> URL: https://issues.apache.org/jira/browse/LUCENE-4981
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-4981.patch, LUCENE-4981.patch
>
>
> According to the documentation 
> (http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PositionFilterFactory),
>  PositionFilter is mainly useful to make query parsers generate boolean 
> queries instead of phrase queries although this problem can be solved at 
> query parsing level instead of analysis level (eg. using 
> QueryParser.setAutoGeneratePhraseQueries).
> So given that PositionFilter corrupts token graphs (see TestRandomChains), I 
> propose to deprecate it.




Re: Help working with patch for SOLR-3076 (Block Joins)

2013-05-16 Thread Tom Burton-West
Thanks Shawn and Vadim,

I'll try the July patch against  r1351040 of 4_x for now.
 Vadim, I'm in no hurry, but I'll watch 3076 for your patch and work with
that when you post it.


Tom


On Thu, May 16, 2013 at 2:14 AM, Vadim Kirilchuk <
vkirilc...@griddynamics.com> wrote:

> Hi,
>
> As far as I know, the patch from 16/Jul/12 was created for branch 4.x, and
> SOLR-3076-childDocs.patch
> from 12/Oct/12 is a little bit reworked SOLR-3076 (for branch 4.x too).
>
> However, they may not be up to date even for 4.x, because of trunk back
> merges (I'm not sure).
>
> Also as mentioned by Shawn, you should use p1 instead of p0.
>
> P.S. I actually have a reworked version for trunk; I can post it in a week if
> you need it.
>
> On Thu, May 16, 2013 at 3:46 AM, Shawn Heisey  wrote:
>
>> On 5/15/2013 5:42 PM, Shawn Heisey wrote:
>>
>>> Through a little detective work, I figured out that it would apply
>>> cleanly to revision 1351040 of trunk.  When I then tried to do 'svn up'
>>> to bring the tree current, there were merge conflicts that will have to
>>> be manually fixed.
>>>
>>
>> It applied also to that revision of branch_4x, and I think there were
>> fewer merge conflicts there, too.  It looks like you want 4x, so that's
>> probably a good thing.
>>
>>
>> Thanks,
>> Shawn
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[jira] [Commented] (SOLR-4448) Allow the solr internal load balancer to be more easily pluggable.

2013-05-16 Thread philip hoy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659610#comment-13659610
 ] 

philip hoy commented on SOLR-4448:
--

I have added a JIRA to cover a potential refactoring to split out a few of the 
responsibilities currently carried out by the LBHttpSolrServer class.

> Allow the solr internal load balancer to be more easily pluggable.
> --
>
> Key: SOLR-4448
> URL: https://issues.apache.org/jira/browse/SOLR-4448
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: philip hoy
>Priority: Minor
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4448.patch, SOLR-4448.patch
>
>
> Widen some access level modifiers to allow the load balancer to be extended 
> and plugged into an HttpShardHandler instance using an extended 
> HttpShardHandlerFactory.




[jira] [Commented] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659615#comment-13659615
 ] 

Steven Bower commented on SOLR-4831:


Yes it was part of a cloud.. it was the leader...

> Transaction logs are leaking
> 
>
> Key: SOLR-4831
> URL: https://issues.apache.org/jira/browse/SOLR-4831
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>
> We have a system in which a client is sending 1 record at a time (via REST) 
> followed by a commit. This has produced ~65k tlog files and the JVM has run 
> out of file descriptors... I grabbed a heap dump from the JVM and I can see 
> ~52k "unreachable" FileDescriptors... This leads me to believe that the 
> TransactionLog is not properly closing all of its files before getting rid 
> of the object... 
> I've verified with lsof that indeed there are ~60k tlog files that are open 
> currently..
> This is Solr 4.3.0




[jira] [Commented] (SOLR-4831) Transaction logs are leaking

2013-05-16 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659616#comment-13659616
 ] 

Steven Bower commented on SOLR-4831:


I am also not certain the index corruption was the root cause (need to do some 
digging).. or if the corruption was related to running out of file 
descriptors...

> Transaction logs are leaking
> 
>
> Key: SOLR-4831
> URL: https://issues.apache.org/jira/browse/SOLR-4831
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Bower
>
> We have a system in which a client is sending 1 record at a time (via REST) 
> followed by a commit. This has produced ~65k tlog files and the JVM has run 
> out of file descriptors... I grabbed a heap dump from the JVM and I can see 
> ~52k "unreachable" FileDescriptors... This leads me to believe that the 
> TransactionLog is not properly closing all of its files before getting rid 
> of the object... 
> I've verified with lsof that indeed there are ~60k tlog files that are open 
> currently..
> This is Solr 4.3.0




[jira] [Updated] (LUCENE-4949) nightly builds have wrong version - need to simplify jenkins config tweaks needed after a release

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated LUCENE-4949:
--

Fix Version/s: (was: 4.4)
   4.3.1
   Labels:   (was: lucene-4.3.1-candidate)

Back ported to 4.3.1 r1483404.

> nightly builds have wrong version - need to simplify jenkins config tweaks 
> needed after a release
> -
>
> Key: LUCENE-4949
> URL: https://issues.apache.org/jira/browse/LUCENE-4949
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.3.1
>
> Attachments: LUCENE-4949.patch, LUCENE-4949.patch
>
>
> Right now, if you look at the configuration for these two apache jenkins 
> jobs...
> * https://builds.apache.org/job/Lucene-Artifacts-4.x/
> * https://builds.apache.org/job/Solr-Artifacts-4.x/
> ..you can see that even though they are building off of the 4.x branch, and 
> even though the 4.x branch says the next version is 4.4, the artifacts from 
> these jobs are labeled as if they will be 4.1 releases.




Re: svn commit: r1482642 - /lucene/dev/branches/lucene_solr_4_3/lucene/core/src/java/org/apache/lucene/util/Constants.java

2013-05-16 Thread Shalin Shekhar Mangar
I have backported LUCENE-4949 to the 4.3 branch.


On Wed, May 15, 2013 at 9:00 PM, Uwe Schindler  wrote:

> Hi,
>
> ok, I just wanted to be sure!
>
> > I double checked (also 3.6.1, 3.6.2, and 4.2.1 releases): hossman's
> commit is
> > correct actually.
> >
> > This is the one used for back compat (its tested with the version
> > comparator), but its just marking which version of lucene wrote the
> > segment. The variable name should really be changed, "MAIN" says nothing.
> >
> > This one is the more important one though: another reason why its not
> > triggered by the version sysprop from build.xml: our comparator cannot
> deal
> > with any -SNAPSHOT or any maven suffixes or any of that horseshit. it
> needs
> > real version numbers.
>
> In trunk and branch_4x we already changed the numbering in common-build to
> fix some bugs with Jenkins! We have now:
>   
> We should maybe backport this commit to 4.3, too.
>
> So we can use this to maybe check consistency or pass this version somehow
> to tests.
>
> Our version comparator can handle those versions, so it expands missing
> parts with ".0", so e.g. "4.3.1" is considered greater than "4.3" (which is
> treated as "4.3.0").
>
> > On Wed, May 15, 2013 at 7:57 AM, Robert Muir  wrote:
> > > On Wed, May 15, 2013 at 4:23 AM, Uwe Schindler 
> > wrote:
> > >> Are we sure that this is the right thing? The LUCENE_MAIN_VERSION is
> > used for index compatibility and should always be only in X.Y format.
> > >>
> > >> Please revert this!
> > >>
> > >
> > > Uwe is correct: please only adjust build.xml here or whatever, but
> > > don't change this!
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
> > commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Regards,
Shalin Shekhar Mangar.
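The padding rule Uwe describes in the quoted thread (missing version parts treated as ".0", so "4.3.1" compares greater than "4.3") can be sketched as follows. This is an illustration of the rule only, not Lucene's actual comparator code.

```java
// Illustration of the version-comparison rule described above: missing
// parts are padded with zeros, so "4.3" is treated as "4.3.0" and
// "4.3.1" compares greater than "4.3". Not Lucene's actual comparator.
class VersionCompareSketch {
    static int compareVersions(String a, String b) {
        String[] pa = a.split("\\.");
        String[] pb = b.split("\\.");
        int n = Math.max(pa.length, pb.length);
        for (int i = 0; i < n; i++) {
            int va = i < pa.length ? Integer.parseInt(pa[i]) : 0; // pad with .0
            int vb = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (va != vb) return Integer.compare(va, vb);
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println("4.3.1 vs 4.3   -> " + compareVersions("4.3.1", "4.3"));
        System.out.println("4.3   vs 4.3.0 -> " + compareVersions("4.3", "4.3.0"));
    }
}
```

Note that a comparator like this chokes on suffixes such as "-SNAPSHOT", which is exactly the limitation mentioned in the thread: it needs plain numeric version strings.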


[jira] [Created] (SOLR-4832) Unable to open new searcher

2013-05-16 Thread Christian Schramm (JIRA)
Christian Schramm created SOLR-4832:
---

 Summary: Unable to open new searcher
 Key: SOLR-4832
 URL: https://issues.apache.org/jira/browse/SOLR-4832
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3
 Environment: Debian Squeeze, Zookeeper 3.4.5
Reporter: Christian Schramm
Priority: Blocker


I'm using a freshly installed Solr 4.3.0 on Debian Squeeze. Whenever I access 
the web interface I get:

Unable to load environment info from /agent/admin/system?wt=json.
This interface requires that you activate the admin request handlers in all 
SolrCores by adding the following configuration to your solrconfig.xml:



where agent is the name of my core. The above line exists in solrconfig.xml. 
When I call /agent/admin/system?wt=xml I get:


Error opening new searcher

org.apache.solr.common.SolrException: Error opening new searcher at 
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1434) at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1546) at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1319) at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1254) at 
org.apache.solr.request.SolrQueryRequestBase.getSearcher(SolrQueryRequestBase.java:94)
 at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcLastModified(HttpCacheHeaderUtil.java:145)
 at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:218)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453) 
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) 
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560) 
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382) 
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) 
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) 
at org.eclipse.jetty.server.Server.handle(Server.java:365) at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
 at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:926)
 at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:988)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635) at 
org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) 
at java.lang.Thread.run(Thread.java:662) Caused by: 
org.apache.lucene.store.AlreadyClosedException: Already closed at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:336)
 at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:246) at 
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1341) ... 33 more

500






[jira] [Updated] (SOLR-4734) Leader election fails with an NPE if there is no UpdateLog.

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4734:


Labels: solr-4.3.1-candidate  (was: )

I'll backport this to 4.3.1 if there are no objections.

> Leader election fails with an NPE if there is no UpdateLog.
> ---
>
> Key: SOLR-4734
> URL: https://issues.apache.org/jira/browse/SOLR-4734
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.3, 4.2.1
> Environment: Linux 64bit on 3.2.0-33-generic kernel
> Solr: 4.2.1
> ZooKeeper: 3.4.5
> Tomcat 7.0.27 
>Reporter: Alexander Eibner
>Assignee: Mark Miller
>Priority: Minor
>  Labels: solr-4.3.1-candidate
> Fix For: 5.0, 4.4
>
> Attachments: config-logs.zip
>
>
> The following setup and steps always lead to the same error:
> app01: ZooKeeper
> app02: ZooKeeper, Solr (in Tomcat)
> app03: ZooKeeper, Solr (in Tomcat) 
> *) Start ZooKeeper as ensemble on all machines.
> *) Start tomcat on app02/app03
> {code:javascript|title=clusterstate.json}
> null
> cZxid = 0x10014
> ctime = Thu Apr 18 10:59:24 CEST 2013
> mZxid = 0x10014
> mtime = Thu Apr 18 10:59:24 CEST 2013
> pZxid = 0x10014
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x0
> dataLength = 0
> numChildren = 0
> {code}
> *) Upload the configuration (on app02) for the collection via the following 
> command:
> {noformat}
> zkcli.sh -cmd upconfig --zkhost app01:4181,app02:4181,app03:4181 
> --confdir config/solr/storage/conf/ --confname storage-conf 
> {noformat}
> *) Linking the configuration (on app02) via the following command:
> {noformat}
> zkcli.sh -cmd linkconfig --collection storage --confname storage-conf 
> --zkhost app01:4181,app02:4181,app03:4181
> {noformat}
> *) Create Collection via: 
> {noformat}
> http://app02/solr/admin/collections?action=CREATE&name=storage&numShards=1&replicationFactor=2&collection.configName=storage-conf
> {noformat}
> {code:javascript|title=clusterstate.json}
> {"storage":{
> "shards":{"shard1":{
> "range":"8000-7fff",
> "state":"active",
> "replicas":{
>   "app02:9985_solr_storage_shard1_replica2":{
> "shard":"shard1",
> "state":"down",
> "core":"storage_shard1_replica2",
> "collection":"storage",
> "node_name":"app02:9985_solr",
> "base_url":"http://app02:9985/solr"},
>   "app03:9985_solr_storage_shard1_replica1":{
> "shard":"shard1",
> "state":"down",
> "core":"storage_shard1_replica1",
> "collection":"storage",
> "node_name":"app03:9985_solr",
> "base_url":"http://app03:9985/solr",
> "router":"compositeId"}}
> cZxid = 0x10014
> ctime = Thu Apr 18 10:59:24 CEST 2013
> mZxid = 0x10047
> mtime = Thu Apr 18 11:04:06 CEST 2013
> pZxid = 0x10014
> cversion = 0
> dataVersion = 2
> aclVersion = 0
> ephemeralOwner = 0x0
> dataLength = 847
> numChildren = 0
> {code}
> This creates the replication of the shard on app02 and app03, but neither of 
> them is marked as leader, both are marked as DOWN.
> And afterwards I cannot access the collection.
> In the browser I get:
> {noformat}
> "SEVERE: org.apache.solr.common.SolrException: no servers hosting shard:"
> {noformat}
> The following stacktrace in the logs:
> {code}
> Apr 18, 2013 11:04:05 AM org.apache.solr.common.SolrException log
> SEVERE: org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'storage_shard1_replica2': 
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:483)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:140)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:591)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:192)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
> at 
> org.apache.catalin

[jira] [Updated] (SOLR-4741) Deleting a collection should set DELETE_DATA_DIR to true.

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4741:


Labels: solr-4.3.1-candidate  (was: )

I'll backport this to 4.3.1 if there are no objections.

> Deleting a collection should set DELETE_DATA_DIR to true.
> -
>
> Key: SOLR-4741
> URL: https://issues.apache.org/jira/browse/SOLR-4741
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
>  Labels: solr-4.3.1-candidate
> Fix For: 5.0, 4.4
>
>
> Currently we remove the instance dir, which usually contains the data dir, 
> but it won't always.




[jira] [Updated] (SOLR-4752) There are some minor bugs in the Collections API error handling.

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4752:


Labels: solr-4.3.1-candidate  (was: )

I'll backport this to 4.3.1 if there are no objections.

> There are some minor bugs in the Collections API error handling.
> 
>
> Key: SOLR-4752
> URL: https://issues.apache.org/jira/browse/SOLR-4752
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
>  Labels: solr-4.3.1-candidate
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4752.patch
>
>





[jira] [Updated] (SOLR-4563) RSS DIH-example not working

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4563:


Labels: solr-4.3.1-candidate  (was: )

I'll backport this to 4.3.1 if there are no objections.

> RSS DIH-example not working
> ---
>
> Key: SOLR-4563
> URL: https://issues.apache.org/jira/browse/SOLR-4563
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: solr-4.3.1-candidate
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4563.patch
>
>
> The xpath paths of /rss/item do not match the real world RSS feed which uses 
> /rss/channel/item




[jira] [Updated] (SOLR-4796) zkcli.sh should honor JAVA_HOME

2013-05-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4796:


Labels: solr-4.3.1-candidate  (was: )

I'll backport this to 4.3.1 if there are no objections.

> zkcli.sh should honor JAVA_HOME
> ---
>
> Key: SOLR-4796
> URL: https://issues.apache.org/jira/browse/SOLR-4796
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.2
>Reporter: Roman Shaposhnik
>Assignee: Mark Miller
>  Labels: solr-4.3.1-candidate
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4796.patch.txt
>
>
> On a system with GNU java installed the fact that zkcli.sh doesn't honor 
> JAVA_HOME could lead to hard to diagnose failure:
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org.apache.solr.cloud.ZkCLI
>at gnu.java.lang.MainThread.run(libgcj.so.7rh)
> Caused by: java.lang.ClassNotFoundException: org.apache.solr.cloud.ZkCLI not 
> found in gnu.gcj.runtime.SystemClassLoader{urls=[], 
> parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
>at java.net.URLClassLoader.findClass(libgcj.so.7rh)
>at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
>at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
>at gnu.java.lang.MainThread.run(libgcj.so.7rh)
> {noformat}




Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1623 - Still Failing

2013-05-16 Thread Shai Erera
I think this exception should be thrown only when the bytes land in the
Directory. In general we buffer bytes before sending them to the actual
IndexOutput, don't we? I don't know if this applies to RAMDirectory too, i.e.
whether there is code out there that buffers its writes to any IndexOutput
without caring whether the target is a RAMFile or not.

And technically, if you have a buffer of 16K, but can only write 2K to the
underlying Directory, you shouldn't hit the exception until you actually
flush the bytes?

Adrien, not in front of the code now, but if writeBytes applies the limit,
why do we need any special logic in MDW.flush?

Shai


On Thu, May 16, 2013 at 3:45 PM, Adrien Grand  wrote:

> On Thu, May 16, 2013 at 1:58 PM, Robert Muir  wrote:
> > I don't get it. MDW wraps its IndexOutput, so it "knows"... sounds like
> > the counting is off.
>
> The problem is that RAMDirectory delays the counting.
> MockDirectoryWrapper.getRecomputedActualSizeInBytes sums all the
> lengths of the existing RAMFiles to get the actual size, but
> RAMFile.length is only updated after a RAMOutputStream seek or flush.
> This means that if you write 5 bytes, then 3 bytes, RAMFile.length
> will still be 0 and then suddenly upon flush it will become 5+3=8.
>
> Using the Mock IndexOutput to track bytes is an option, but I was
> thinking it could be interesting too to see what happens with
> directories that buffer content so that the disk full exception
> happens in flush instead of writeBytes?
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
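Adrien's delayed-counting point can be sketched with a toy model (names here are illustrative, not Lucene's actual RAMFile/RAMOutputStream API): the file's visible length only advances on flush, so a size check that sums file lengths between writes under-counts the buffered bytes.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of deferred length accounting: writeBytes buffers, and the
// length a size check can see is only updated when flush runs.
class ToyRamFile {
    private final List<byte[]> buffered = new ArrayList<>();
    long length; // stale until flush, like RAMFile.length before seek/flush

    void writeBytes(byte[] b) { buffered.add(b); } // length not yet advanced
    void flush() {
        for (byte[] b : buffered) length += b.length; // 5 + 3 lands as 8 at once
        buffered.clear();
    }

    public static void main(String[] args) {
        ToyRamFile f = new ToyRamFile();
        f.writeBytes(new byte[5]);
        f.writeBytes(new byte[3]);
        System.out.println(f.length); // 0 -- a disk-full check sees nothing yet
        f.flush();
        System.out.println(f.length); // 8 -- all buffered bytes appear at once
    }
}
```

This is why a limit enforced only at flush time can fire "late" relative to the writes that actually exceeded it.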


Re: [JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1623 - Still Failing

2013-05-16 Thread Robert Muir
On Thu, May 16, 2013 at 8:45 AM, Adrien Grand  wrote:

> Using the Mock IndexOutput to track bytes is an option, but I was
> thinking it could be interesting too to see what happens with
> directories that buffer content so that the disk full exception
> happens in flush instead of writeBytes?
>

But it can easily happen in both places...

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4773) New discovery mode needs to ensure that instanceDir is correct

2013-05-16 Thread Shawn Heisey (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13659676#comment-13659676 ]

Shawn Heisey commented on SOLR-4773:


[~andyfowler] - I finally found some time and looked into this more deeply in 
relation to 4.3.0.  The patch for this issue does not fix the problem with 
relative dataDirs in the released version.  The patch as-is won't apply to 
4.3.0 because the code in branch_4x was significantly refactored, but I found 
the right place to apply the change from getPath to getCanonicalPath, and it 
didn't help.  It did fix an exception during startup (solrconfig.xml couldn't 
be found) when I followed your simple instructions for running the multicore 
example with discovery, but only core0 started; core1 didn't, because it had 
the same dataDir as core0.

[~markrmil...@gmail.com] - Do you happen to know how we can fix the problem 
with relative or missing dataDir properties in the 4.3 branch?  Would the 
change be trivial enough to make it into 4.3.1?  Discovery mode is essentially 
broken in the 4.3.0 release unless every path is absolute and explicitly 
declared in the properties file, which is not how I want things to work in my 
own setup.
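For reference, the getPath vs. getCanonicalPath distinction mentioned above, in a standalone sketch (not Solr's actual code): getPath preserves the relative form as constructed, while getCanonicalPath resolves it against the current working directory and collapses "." and ".." segments.

```java
import java.io.File;
import java.io.IOException;

public class PathDemo {
    public static void main(String[] args) throws IOException {
        File dataDir = new File("solr/core0/../core1/data");
        // getPath returns the path exactly as constructed, ".." and all
        System.out.println(dataDir.getPath());
        // getCanonicalPath resolves against the working directory,
        // yielding an absolute path with no "." or ".." segments
        System.out.println(dataDir.getCanonicalPath());
    }
}
```

Whether canonicalizing alone fixes the relative-dataDir handling is exactly what the comment above reports on (it didn't, in this case).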


> New discovery mode needs to ensure that instanceDir is correct
> --
>
> Key: SOLR-4773
> URL: https://issues.apache.org/jira/browse/SOLR-4773
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 5.0, 4.4
>Reporter: Erick Erickson
>Assignee: Mark Miller
> Fix For: 5.0, 4.4
>
> Attachments: SOLR-4773.patch, SOLR-4773.patch
>
>
> Doing a fresh checkout of 4.x (trunk too, I think) and firing up the example 
> fails because we can't find solrconfig. The construction of the instanceDir 
> in SolrCoreDiscoverer builds a path with an extra solr segment (e.g. 
> solr/solr/core).
> I'll attach a patch shortly.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


