[jira] [Created] (SOLR-8562) BalanceShardUnique and Migrate should extend CollectionSpecificAdminRequest

2016-01-17 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-8562:
---

 Summary: BalanceShardUnique and Migrate should extend 
CollectionSpecificAdminRequest
 Key: SOLR-8562
 URL: https://issues.apache.org/jira/browse/SOLR-8562
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Assignee: Varun Thacker
 Fix For: 5.5, Trunk


Currently BalanceShardUnique and Migrate extend CollectionAdminRequest, but 
since they are collection-specific actions they should extend 
CollectionSpecificAdminRequest instead.
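
To make the intended shape concrete, here is a minimal, self-contained sketch of 
the hierarchy change (the class bodies are illustrative stand-ins, not the actual 
SolrJ code, which carries many more parameters and the request/response plumbing):

{code:java}
// Self-contained model of the proposed inheritance change; illustrative only.
public class CollectionRequestHierarchySketch {

  // Generic base: knows nothing about a particular collection.
  static abstract class CollectionAdminRequest {
    abstract String getAction();
  }

  // Collection-specific base: owns the collection name so every
  // collection-scoped action shares the same setter.
  static abstract class CollectionSpecificAdminRequest extends CollectionAdminRequest {
    protected String collection;

    public CollectionSpecificAdminRequest setCollectionName(String collection) {
      this.collection = collection;
      return this;
    }
  }

  // After the change, BalanceShardUnique (and, analogously, Migrate) extend
  // the collection-specific base instead of the generic one.
  static class BalanceShardUnique extends CollectionSpecificAdminRequest {
    @Override
    String getAction() {
      return "BALANCESHARDUNIQUE";
    }
  }

  public static void main(String[] args) {
    BalanceShardUnique req = new BalanceShardUnique();
    req.setCollectionName("collection1");
    System.out.println(req.getAction() + " on " + req.collection);
  }
}
{code}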






[jira] [Commented] (LUCENE-6896) Fix/document various Similarity bugs around extreme norm values

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15104896#comment-15104896
 ] 

ASF subversion and git services commented on LUCENE-6896:
-

Commit 1725178 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1725178 ]

LUCENE-6896: don't treat smallest possible norm value as an infinitely long doc 
in SimilarityBase or BM25Similarity

> Fix/document various Similarity bugs around extreme norm values
> ---
>
> Key: LUCENE-6896
> URL: https://issues.apache.org/jira/browse/LUCENE-6896
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6896.patch
>
>
> Spinoff from LUCENE-6818:
> [~iorixxx] found problems with every Similarity (except ClassicSimilarity) 
> when trying to test how they behave on every possible norm value, to ensure 
> they are robust for all index-time boosts.
> There are several problems:
> 1. A buggy normalization decode causes the smallest possible norm value 
> (0) to be treated as an infinitely long document. These values are intended 
> to be encoded as non-negative finite values, but going to infinity breaks 
> everything.
> 2. Various problems in the less practical functions that already have 
> documented warnings that they do bad things for extreme values. These impact 
> DFR models D, Be, and P, and the IB distribution SPL.
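
As a rough, standalone illustration of problem 1 (the decode below is a simplified 
stand-in, not Lucene's actual SmallFloat norm table): when the smallest encoded 
norm decodes to 0, the reconstructed length 1/norm^2 becomes infinite, which is 
what the commit above stops treating as a valid document length.

{code:java}
// Illustrative only: simplified norm decode showing why an encoded byte of 0
// used to look like an infinitely long document.
public class NormDecodeSketch {

  // Simplified decode: byte 0 maps to 0.0f, which is where the trouble starts.
  static float decodeNorm(byte b) {
    return (b & 0xFF) / 255f;
  }

  // Length reconstruction in the style of BM25Similarity/SimilarityBase:
  // the norm was stored as roughly 1/sqrt(length), so length = 1 / norm^2.
  static float lengthFromNorm(byte b) {
    float norm = decodeNorm(b);
    return 1f / (norm * norm);        // +Infinity when norm == 0
  }

  public static void main(String[] args) {
    System.out.println(lengthFromNorm((byte) 0));  // Infinity -> breaks scoring
    System.out.println(lengthFromNorm((byte) 1));  // large but finite
  }
}
{code}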






[JENKINS-EA] Lucene-Solr-5.4-Linux (64bit/jdk-9-ea+95) - Build # 403 - Still Failing!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.4-Linux/403/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testIndexingWithTikaEntityProcessor

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([75B3D85CD20DE58F:286FD9C9F2F06B3D]:0)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:146)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at 
org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:159)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:417)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:481)
at 
org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:186)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
at org.apache.solr.util.TestHarness.query(TestHarness.java:311)
at 
org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase.runFullImport(AbstractDataImportHandlerTestCase.java:87)
at 
org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testIndexingWithTikaEntityProcessor(TestTikaEntityProcessor.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOn

[jira] [Commented] (SOLR-8561) Add fallback to ZkController.getLeaderProps for mixed 5.4-pre-5.4 deployments

2016-01-17 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15104201#comment-15104201
 ] 

Shai Erera commented on SOLR-8561:
--

FYI, all tests pass.

> Add fallback to ZkController.getLeaderProps for mixed 5.4-pre-5.4 
> deployments
> ---
>
> Key: SOLR-8561
> URL: https://issues.apache.org/jira/browse/SOLR-8561
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: SOLR-8561.patch
>
>
> See the last comments in SOLR-7844. The latter changed the structure of the 
> leader path in ZK such that upgrading from pre-5.4 to 5.4 is impossible 
> unless all nodes are taken down. This issue adds fallback logic to look for 
> the leader properties on the old ZK node, as discussed.
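
The fallback amounts to reading the new leader node first and, if it does not 
exist, reading the old one. Below is a minimal sketch of that shape using the 
plain ZooKeeper client and placeholder paths (the actual patch goes through 
ZkController/SolrZkClient and the real leader paths):

{code:java}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Illustrative sketch only; "newPath"/"oldPath" stand in for the post- and
// pre-SOLR-7844 locations of the leader properties.
public class LeaderPropsFallbackSketch {

  static byte[] readLeaderProps(ZooKeeper zk, String newPath, String oldPath)
      throws KeeperException, InterruptedException {
    try {
      // Try the 5.4+ layout first.
      return zk.getData(newPath, false, new Stat());
    } catch (KeeperException.NoNodeException e) {
      // Mixed cluster: a pre-5.4 node registered the leader under the old path.
      return zk.getData(oldPath, false, new Stat());
    }
  }
}
{code}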






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 15283 - Still Failing!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15283/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.hadoop.MorphlineMapperTest: 
   1) Thread[id=14, name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
        at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.hadoop.MorphlineMapperTest: 
   1) Thread[id=14, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([30543028976D0AF7]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=14, name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
        at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=14, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([30543028976D0AF7]:0)


FAILED:  org.apache.solr.hadoop.MorphlineMapperTest.testMapper

Error Message:
Malformed / non-existent locale:  near: { # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_30543028976D0AF7-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 158 "solrLocator" : { # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_30543028976D0AF7-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 158 "collection" : "collection1" }, # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_30543028976D0AF7-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 202 "lowernames" : true, # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_30543028976D0AF7-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 207 #  Tika parsers to be registered. If multiple parsers support the same 
MIME type,  #  the parser is chosen that is closest to the bottom in this 
list: "parsers" : [ # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_30543028976D0AF7-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 208 { # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMa

[JENKINS-EA] Lucene-Solr-5.4-Linux (32bit/jdk-9-ea+95) - Build # 402 - Failure!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.4-Linux/402/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseConcMarkSweepGC -XX:-CompactStrings

3 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testEmbeddedDocsLegacy

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([47153743FB214368:4AB146BA5DF5046D]:0)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:146)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at 
org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:159)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:417)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:481)
at 
org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:186)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
at org.apache.solr.util.TestHarness.query(TestHarness.java:311)
at 
org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase.runFullImport(AbstractDataImportHandlerTestCase.java:87)
at 
org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testEmbeddedDocsLegacy(TestTikaEntityProcessor.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingO

[jira] [Updated] (SOLR-8561) Add fallback to ZkController.getLeaderProps for mixed 5.4-pre-5.4 deployments

2016-01-17 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-8561:
-
Attachment: SOLR-8561.patch

[~markrmil...@gmail.com], this is the same patch I added on SOLR-7844. I will 
add the CHANGES entry once I know which section it belongs in (i.e., whether it 
makes it into 5.4.1).

> Add fallback to ZkController.getLeaderProps for mixed 5.4-pre-5.4 
> deployments
> ---
>
> Key: SOLR-8561
> URL: https://issues.apache.org/jira/browse/SOLR-8561
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: SOLR-8561.patch
>
>
> See the last comments in SOLR-7844. The latter changed the structure of the 
> leader path in ZK such that upgrading from pre-5.4 to 5.4 is impossible 
> unless all nodes are taken down. This issue adds fallback logic to look for 
> the leader properties on the old ZK node, as discussed.






[jira] [Updated] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-7844:
-
Attachment: (was: SOLR-7844.patch)

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However, the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario: a three-machine cluster in which all of the machines are restarting 
> at approximately the same time.
> The first machine starts and writes a leader_elect ephemeral node; it's the 
> only candidate in the election, so it wins and starts the leadership process. 
> As it knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes is created.
> During the period between the ZK session expiring and the new set of 
> leader_elect nodes being created, the second machine starts.
> It creates its leader_elect ephemeral nodes; as there are no other nodes, it 
> wins the election and starts the leadership process. As it's still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2, declared its leadership and 
> created the nodes. However, machine3 has just deleted them and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, however it's run serially, which 
> has blocking issues, and just canceling does not cause the wait loop to 
> exit). That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.
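
As a rough illustration of that proposed direction (names and structure are 
illustrative, not Solr's actual ElectionContext/LeaderElector code), the two key 
pieces are a closed flag that the leadership wait loop checks and a reconnect 
hook that closes every outstanding context up front:

{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of "cancel & close all election contexts on reconnect".
public class ElectionContextSketch {

  static class ElectionContext {
    private volatile boolean closed;

    void close() { closed = true; }

    // Leadership wait loop: without the closed check this could keep blocking
    // even after the session that created it has expired.
    void waitForReplicas(long timeoutMs) throws InterruptedException {
      long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
      while (!closed && System.nanoTime() < deadline) {
        Thread.sleep(100);  // poll; the real code waits on cluster state
      }
    }
  }

  static final List<ElectionContext> contexts = new CopyOnWriteArrayList<>();

  // Called from the reconnect handler: close everything up front instead of
  // cancelling serially while old wait loops keep blocking.
  static void onReconnect() {
    for (ElectionContext ctx : contexts) {
      ctx.close();
    }
  }
}
{code}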






[jira] [Resolved] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved SOLR-7844.
--
Resolution: Fixed

Opened SOLR-8561

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However, the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario: a three-machine cluster in which all of the machines are restarting 
> at approximately the same time.
> The first machine starts and writes a leader_elect ephemeral node; it's the 
> only candidate in the election, so it wins and starts the leadership process. 
> As it knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes is created.
> During the period between the ZK session expiring and the new set of 
> leader_elect nodes being created, the second machine starts.
> It creates its leader_elect ephemeral nodes; as there are no other nodes, it 
> wins the election and starts the leadership process. As it's still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2, declared its leadership and 
> created the nodes. However, machine3 has just deleted them and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, however it's run serially, which 
> has blocking issues, and just canceling does not cause the wait loop to 
> exit). That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.






[jira] [Created] (SOLR-8561) Add fallback to ZkController.getLeaderProps for mixed 5.4-pre-5.4 deployments

2016-01-17 Thread Shai Erera (JIRA)
Shai Erera created SOLR-8561:


 Summary: Add fallback to ZkController.getLeaderProps for mixed 
5.4-pre-5.4 deployments
 Key: SOLR-8561
 URL: https://issues.apache.org/jira/browse/SOLR-8561
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shai Erera
Assignee: Shai Erera


See the last comments in SOLR-7844. The latter changed the structure of the 
leader path in ZK such that upgrading from pre-5.4 to 5.4 is impossible unless 
all nodes are taken down. This issue adds fallback logic to look for the leader 
properties on the old ZK node, as discussed.






[jira] [Commented] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15104144#comment-15104144
 ] 

Shai Erera commented on SOLR-7844:
--

OK, I will open a separate issue.

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However, the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario: a three-machine cluster in which all of the machines are restarting 
> at approximately the same time.
> The first machine starts and writes a leader_elect ephemeral node; it's the 
> only candidate in the election, so it wins and starts the leadership process. 
> As it knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes is created.
> During the period between the ZK session expiring and the new set of 
> leader_elect nodes being created, the second machine starts.
> It creates its leader_elect ephemeral nodes; as there are no other nodes, it 
> wins the election and starts the leadership process. As it's still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2, declared its leadership and 
> created the nodes. However, machine3 has just deleted them and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, however it's run serially, which 
> has blocking issues, and just canceling does not cause the wait loop to 
> exit). That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.






[jira] [Commented] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15104142#comment-15104142
 ] 

Mark Miller commented on SOLR-7844:
---

I can look in the morning. We should use the other issue and just link to this 
one. This issue is released and essentially frozen now. 

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However, the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario: a three-machine cluster in which all of the machines are restarting 
> at approximately the same time.
> The first machine starts and writes a leader_elect ephemeral node; it's the 
> only candidate in the election, so it wins and starts the leadership process. 
> As it knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes is created.
> During the period between the ZK session expiring and the new set of 
> leader_elect nodes being created, the second machine starts.
> It creates its leader_elect ephemeral nodes; as there are no other nodes, it 
> wins the election and starts the leadership process. As it's still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2, declared its leadership and 
> created the nodes. However, machine3 has just deleted them and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, however it's run serially, which 
> has blocking issues, and just canceling does not cause the wait loop to 
> exit). That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.






[jira] [Updated] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-7844:
-
Attachment: SOLR-7844.patch

[~markrmil...@gmail.com], can you please review this patch? If it looks OK to 
you, I'd like to try to get it out in 5.4.1.

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However, the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario: a three-machine cluster in which all of the machines are restarting 
> at approximately the same time.
> The first machine starts and writes a leader_elect ephemeral node; it's the 
> only candidate in the election, so it wins and starts the leadership process. 
> As it knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes is created.
> During the period between the ZK session expiring and the new set of 
> leader_elect nodes being created, the second machine starts.
> It creates its leader_elect ephemeral nodes; as there are no other nodes, it 
> wins the election and starts the leadership process. As it's still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2, declared its leadership and 
> created the nodes. However, machine3 has just deleted them and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, however it's run serially, which 
> has blocking issues, and just canceling does not cause the wait loop to 
> exit). That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.






[jira] [Reopened] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera reopened SOLR-7844:
--

Reopening to address the "upgrade from 5.3" issue.

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However, the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario: a three-machine cluster in which all of the machines are restarting 
> at approximately the same time.
> The first machine starts and writes a leader_elect ephemeral node; it's the 
> only candidate in the election, so it wins and starts the leadership process. 
> As it knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes is created.
> During the period between the ZK session expiring and the new set of 
> leader_elect nodes being created, the second machine starts.
> It creates its leader_elect ephemeral nodes; as there are no other nodes, it 
> wins the election and starts the leadership process. As it's still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2, declared its leadership and 
> created the nodes. However, machine3 has just deleted them and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, however it's run serially, which 
> has blocking issues, and just canceling does not cause the wait loop to 
> exit). That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 913 - Still Failing

2016-01-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/913/

2 tests failed.
FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.mrRun

Error Message:
Failed on local exception: java.io.IOException: Connection reset by peer; Host 
Details : local host is: "lucene1-us-west/10.41.0.5"; destination host is: 
"lucene1-us-west.apache.org":50207; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: Connection 
reset by peer; Host Details : local host is: "lucene1-us-west/10.41.0.5"; 
destination host is: "lucene1-us-west.apache.org":50207; 
at 
__randomizedtesting.SeedInfo.seed([4FC21169B110E377:4190A567B086D178]:0)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy46.getClusterMetrics(Unknown Source)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy47.getClusterMetrics(Unknown Source)
at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:461)
at 
org.apache.hadoop.mapred.ResourceMgrDelegate.getClusterMetrics(ResourceMgrDelegate.java:151)
at 
org.apache.hadoop.mapred.YARNRunner.getClusterMetrics(YARNRunner.java:179)
at 
org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:246)
at org.apache.hadoop.mapred.JobClient$3.run(JobClient.java:719)
at org.apache.hadoop.mapred.JobClient$3.run(JobClient.java:717)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:717)
at 
org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:645)
at 
org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:608)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.mrRun(MorphlineBasicMiniMRTest.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(Rando

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 15282 - Still Failing!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15282/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.hadoop.MorphlineMapperTest: 
   1) Thread[id=15, name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
        at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.hadoop.MorphlineMapperTest: 
   1) Thread[id=15, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([4112C2C3BC74833D]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=15, name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
        at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=15, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([4112C2C3BC74833D]:0)


FAILED:  org.apache.solr.hadoop.MorphlineMapperTest.testMapper

Error Message:
Malformed / non-existent locale:  near: { # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_4112C2C3BC74833D-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 198 "fmap" : { # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_4112C2C3BC74833D-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 198 "content-type" : "content_type", # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_4112C2C3BC74833D-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 198 "content" : "text" }, # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_4112C2C3BC74833D-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 207 #  Tika parsers to be registered. If multiple parsers support the same 
MIME type,  #  the parser is chosen that is closest to the bottom in this 
list: "parsers" : [ # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_4112C2C3BC74833D-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 208 { # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMappe

[jira] [Comment Edited] (SOLR-8522) ImplicitSnitch to support IPv4 based tags

2016-01-17 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15104043#comment-15104043
 ] 

Arcadius Ahouansou edited comment on SOLR-8522 at 1/18/16 1:44 AM:
---

Thank you very much [~noble.paul] for taking the time to look into this.

{quote}
The solr node string may not always be an IP address. It could be something 
like {{host:port}}. So IP address needs a lookup
{quote}
You are right about this. I was not aware of this.
It turned out that a user could start Solr with -Dhost=someHostName.
Doing the lookup as suggested is quite simple. However, one host could have 
multiple public and private IPs. We could pick the first public one or 
something...

This led me to start contemplating the idea of a more generic snitch that will 
deal with host names as well as IPs, like
{{192.168.1.2 -> host_1=2, host_2=1, host_3=168, host_4=192}}
and
{{serv1.dc1.london.uk.apache.org -> host_1=org, host_2=apache, host_3=uk, 
host_4=london, host_5=dc1, host_6=serv1}}

Any comments about this?


{quote}Let's start from least significant to most significant{quote}
Yes, that makes sense.


{quote}Do not blindly add a tag. Add only if it is requested{quote}

The current implementation adds only the tags that are requested.
The ones that are not requested are not added to the response.
This is tested in:
- {{testGetTagsWithEmptyIPv4RequestedTag()}}, where no tag is requested and 
none is returned, and
- {{testGetTagsWithIPv4RequestedTags_ip_2_ip_4()}}, where only 2 tags are 
requested, leading to only 2 out of 4 being returned.


Please let me know about the idea of a more generic snitch that could handle 
host names as well.

Many thanks



was (Author: arcadius):
Thank you very much [~noble.paul] for taking the time to look into this.

{quote}
The solr node string may not always be an IP address. It could be something 
like {{host:port}}. So IP address needs a lookup
{quote}
You are right about this. I was not aware of this.
It turned out that a user could start Solr with -Dhost=someHostName.
Doing the lookup as suggested is quite simple. However, one host could have 
multiple public and private IPs. We could pick the first public one or 
something...

This led me to start contemplating the idea of a more generic snitch like
{{192.168.1.2 -> host_1=2, host_2=1, host_3=168, host_4=192}}
and
{{serv1.dc1.london.uk.apache.org -> host_1=org, host_2=apache, host_3=uk, 
host_4=london, host_5=dc1, host_6=serv1}}

Any comments about this?


{quote}Let's start from least significant to most significant{quote}
Yes, that makes sense.


{quote}Do not blindly add a tag. Add only if it is requested{quote}

The current implementation adds only the tags that are requested.
The ones that are not requested are not added to the response.
This is tested in:
- {{testGetTagsWithEmptyIPv4RequestedTag()}}, where no tag is requested and 
none is returned, and
- {{testGetTagsWithIPv4RequestedTags_ip_2_ip_4()}}, where only 2 tags are 
requested, leading to only 2 out of 4 being returned.


Please let me know about the idea of a more generic snitch that could handle 
host names as well.

Many thanks


> ImplicitSnitch to support IPv4 based tags
> -
>
> Key: SOLR-8522
> URL: https://issues.apache.org/jira/browse/SOLR-8522
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-8522.patch
>
>
> This is a description from [~noble.paul]'s comment on SOLR-8146
> Let's assume a Solr node IPv4 address is 192.93.255.255.
> This is about enhancing the current {{ImplicitSnitch}} to support IP-based 
> tags like:
> - {{ip_1 = 192}}
> - {{ip_2 = 93}}
> - {{ip_3 = 255}}
> - {{ip_4 = 255}}
> Note that IPv6 support will be implemented by a separate ticket



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8522) ImplicitSnitch to support IPv4 based tags

2016-01-17 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15104043#comment-15104043
 ] 

Arcadius Ahouansou commented on SOLR-8522:
--

Thank you very much [~noble.paul] for taking the time to look into this.

{quote}
The solr node string may not always be an IP address. It could be something 
like {{host:port}}. So IP address needs a lookup
{quote}
You are right about this; I was not aware of it.
It turned out that a user could start Solr with -Dhost=someHostName.
Doing the lookup as suggested is quite simple. However, one host could have 
multiple public and private IPs. We could pick the first public one or 
something...

This led me to start contemplating the idea of a more generic snitch like
{{192.168.1.2 -> host_1=2, host_2=1, host_3=168, host_4=192}}
and
{{serv1.dc1.london.uk.apache.org -> host_1=org, host_2=apache, host_3=uk, 
host_4=london, host_5=dc1, host_6=serv1}} 

Any comment about this?


{quote}Let's start from least significant to most significant{quote}
Yes, makes sense


{quote}Do not blindly add a tag. Add if it is only requested{quote}

The current implementation adds only the tags that are requested.
The ones that are not requested are not added to the response.
This is tested in 
- {{testGetTagsWithEmptyIPv4RequestedTag()}} where no tag is requested -> none 
returned, and
 
- {{testGetTagsWithIPv4RequestedTags_ip_2_ip_4()}} where only 2 tags are 
requested leading to only 2 out of 4 being returned 


Please let me know about the idea of a more generic snitch that could handle 
host names as well.

Many thanks


> ImplicitSnitch to support IPv4 based tags
> -
>
> Key: SOLR-8522
> URL: https://issues.apache.org/jira/browse/SOLR-8522
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-8522.patch
>
>
> This is a description from [~noble.paul]'s comment on SOLR-8146
> Let's assume a Solr node IPv4 address is 192.93.255.255.
> This is about enhancing the current {{ImplicitSnitch}} to support IP-based 
> tags like:
> - {{ip_1 = 192}}
> - {{ip_2 = 93}}
> - {{ip_3 = 255}}
> - {{ip_4 = 255}}
> Note that IPv6 support will be implemented by a separate ticket



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6980) Default to applying deletes when opening NRT reader from writer

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15104026#comment-15104026
 ] 

ASF subversion and git services commented on LUCENE-6980:
-

Commit 1725162 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1725162 ]

LUCENE-6980: default applyDeletes to true when opening NRT readers

> Default to applying deletes when opening NRT reader from writer
> ---
>
> Key: LUCENE-6980
> URL: https://issues.apache.org/jira/browse/LUCENE-6980
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.x
>
> Attachments: LUCENE-6980.patch
>
>
> Today, {{DirectoryReader.open}}, etc., all require you to pass a
> supremely expert {{boolean applyDeletes}}.  I think the vast majority
> of users should just default this to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6980) Default to applying deletes when opening NRT reader from writer

2016-01-17 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6980.

Resolution: Fixed

> Default to applying deletes when opening NRT reader from writer
> ---
>
> Key: LUCENE-6980
> URL: https://issues.apache.org/jira/browse/LUCENE-6980
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.x
>
> Attachments: LUCENE-6980.patch
>
>
> Today, {{DirectoryReader.open}}, etc., all require you to pass a
> supremely expert {{boolean applyDeletes}}.  I think the vast majority
> of users should just default this to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6980) Default to applying deletes when opening NRT reader from writer

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15104015#comment-15104015
 ] 

ASF subversion and git services commented on LUCENE-6980:
-

Commit 1725160 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1725160 ]

LUCENE-6980: default applyDeletes to true when opening NRT readers

> Default to applying deletes when opening NRT reader from writer
> ---
>
> Key: LUCENE-6980
> URL: https://issues.apache.org/jira/browse/LUCENE-6980
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.x
>
> Attachments: LUCENE-6980.patch
>
>
> Today, {{DirectoryReader.open}}, etc., all require you to pass a
> supremely expert {{boolean applyDeletes}}.  I think the vast majority
> of users should just default this to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-5.x-Linux (32bit/jdk-9-ea+95) - Build # 15281 - Still Failing!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15281/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseParallelGC -XX:-CompactStrings

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=6731, 
name=zkCallback-738-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=6413, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[8B0971160861CDAC]-SendThread(127.0.0.1:36563),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:994)3) 
Thread[id=6732, name=zkCallback-738-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=6414, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[8B0971160861CDAC]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:178) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
5) Thread[id=6415, name=zkCallback-738-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=6731, name=zkCallback-738-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
at java.lang.Thread.run(Thread.java:747)
   2) Thread[id=6413, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[8

[jira] [Resolved] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2016-01-17 Thread Ramkumar Aiyengar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Aiyengar resolved SOLR-8418.
-
   Resolution: Fixed
Fix Version/s: (was: 5.5)
   5.4.1

> BoostQuery cannot be cast to TermQuery
> --
>
> Key: SOLR-8418
> URL: https://issues.apache.org/jira/browse/SOLR-8418
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis
>Affects Versions: 5.4
>Reporter: Jens Wille
>Assignee: Ramkumar Aiyengar
> Fix For: Trunk, 5.4.1
>
> Attachments: SOLR-8418.patch, SOLR-8418.patch
>
>
> As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
> r1701621 to use the new API. In SOLR-7912, I adapted that code for 
> {{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
> parameter just failed for me after updating to 5.4 with the following error 
> message:
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
> cast to org.apache.lucene.search.TermQuery
> at 
> org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}
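
For context, a minimal sketch of how such a cast can be made safe by unwrapping the boost wrapper first (this assumes Lucene 5.4's {{BoostQuery}} API and is not necessarily what the attached patch does):

{code}
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class BoostUnwrapSketch {
  // Since LUCENE-6590, a boosted query comes back wrapped in a BoostQuery,
  // so unwrap it before casting to TermQuery.
  static TermQuery unwrapTermQuery(Query q) {
    while (q instanceof BoostQuery) {
      q = ((BoostQuery) q).getQuery();
    }
    return (TermQuery) q; // still fails if the inner query is not a TermQuery
  }
}
{code}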



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103963#comment-15103963
 ] 

ASF subversion and git services commented on SOLR-8418:
---

Commit 1725146 from [~andyetitmoves] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1725146 ]

SOLR-8418: Move CHANGES.txt entry to 5.4.1

> BoostQuery cannot be cast to TermQuery
> --
>
> Key: SOLR-8418
> URL: https://issues.apache.org/jira/browse/SOLR-8418
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis
>Affects Versions: 5.4
>Reporter: Jens Wille
>Assignee: Ramkumar Aiyengar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8418.patch, SOLR-8418.patch
>
>
> As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
> r1701621 to use the new API. In SOLR-7912, I adapted that code for 
> {{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
> parameter just failed for me after updating to 5.4 with the following error 
> message:
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
> cast to org.apache.lucene.search.TermQuery
> at 
> org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103962#comment-15103962
 ] 

ASF subversion and git services commented on SOLR-8418:
---

Commit 1725145 from [~andyetitmoves] in branch 'dev/trunk'
[ https://svn.apache.org/r1725145 ]

SOLR-8418: Move CHANGES.txt entry to 5.4.1

> BoostQuery cannot be cast to TermQuery
> --
>
> Key: SOLR-8418
> URL: https://issues.apache.org/jira/browse/SOLR-8418
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis
>Affects Versions: 5.4
>Reporter: Jens Wille
>Assignee: Ramkumar Aiyengar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8418.patch, SOLR-8418.patch
>
>
> As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
> r1701621 to use the new API. In SOLR-7912, I adapted that code for 
> {{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
> parameter just failed for me after updating to 5.4 with the following error 
> message:
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
> cast to org.apache.lucene.search.TermQuery
> at 
> org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 5.4.1 RC1

2016-01-17 Thread Ramkumar R. Aiyengar
Thanks for waiting Adrien. I have now backported SOLR-8418 to 5.4.

On Sun, Jan 17, 2016 at 10:01 PM, Adrien Grand wrote:

> On Sun, Jan 17, 2016 at 4:00 AM, Mark Miller wrote:
>
>> Always nice to be patient and accommodating for reroll 1. You know, since
>> we are all friends here. Best to get picky later on.
>
>
> I don't think it is fair to qualify the fact that we should release the
> corruption fix as soon as possible as picky.
>
> I can't make everybody happy here as there are two valid opposite
> arguments that we should release the corruption fix as soon as possible on
> the one hand and that it is ridiculous to release Solr with such a major
> bug on the other hand.
>
> I won't release the current artifacts and will respin tomorrow morning EU
> time (about 12 hours from now). If someone could backport SOLR-8418 to the
> 5.4 branch until then, that would be great, otherwise I will do it myself
> before building the RC.
>



-- 
Not sent from my iPhone or my Blackberry or anyone else's


[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103960#comment-15103960
 ] 

ASF subversion and git services commented on LUCENE-6590:
-

Commit 1725144 from [~andyetitmoves] in branch 'dev/branches/lucene_solr_5_4'
[ https://svn.apache.org/r1725144 ]

SOLR-8418: Adapt to changes in LUCENE-6590 for use of boosts with MLTHandler 
and Simple/CloudMLTQParser

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.
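
As a rough illustration of the second option (assuming a {{BoostQuery}}-style wrapper; the names here are illustrative, not the committed API), a boost becomes a wrapper around an immutable query instead of mutable state on it:

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class BoostWrapperSketch {
  static Query boostedTitleQuery() {
    // Mutable style: q.setBoost(2f) changed the query in place and therefore
    // its hashCode, making it a poor cache key.
    // Immutable style: wrap the query instead of mutating it.
    Query base = new TermQuery(new Term("title", "lucene"));
    return new BoostQuery(base, 2f); // base itself is never modified
  }
}
{code}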



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103959#comment-15103959
 ] 

ASF subversion and git services commented on SOLR-8418:
---

Commit 1725144 from [~andyetitmoves] in branch 'dev/branches/lucene_solr_5_4'
[ https://svn.apache.org/r1725144 ]

SOLR-8418: Adapt to changes in LUCENE-6590 for use of boosts with MLTHandler 
and Simple/CloudMLTQParser

> BoostQuery cannot be cast to TermQuery
> --
>
> Key: SOLR-8418
> URL: https://issues.apache.org/jira/browse/SOLR-8418
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis
>Affects Versions: 5.4
>Reporter: Jens Wille
>Assignee: Ramkumar Aiyengar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8418.patch, SOLR-8418.patch
>
>
> As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
> r1701621 to use the new API. In SOLR-7912, I adapted that code for 
> {{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
> parameter just failed for me after updating to 5.4 with the following error 
> message:
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
> cast to org.apache.lucene.search.TermQuery
> at 
> org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2016-01-17 Thread Ramkumar Aiyengar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Aiyengar reopened SOLR-8418:
-

Reopening to backport to 5.4.1

> BoostQuery cannot be cast to TermQuery
> --
>
> Key: SOLR-8418
> URL: https://issues.apache.org/jira/browse/SOLR-8418
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis
>Affects Versions: 5.4
>Reporter: Jens Wille
>Assignee: Ramkumar Aiyengar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8418.patch, SOLR-8418.patch
>
>
> As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
> r1701621 to use the new API. In SOLR-7912, I adapted that code for 
> {{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
> parameter just failed for me after updating to 5.4 with the following error 
> message:
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
> cast to org.apache.lucene.search.TermQuery
> at 
> org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 5.4.1 RC1

2016-01-17 Thread Adrien Grand
On Sun, Jan 17, 2016 at 4:00 AM, Mark Miller wrote:

> Always nice to be patient and accommodating for reroll 1. You know, since
> we are all friends here. Best to get picky later on.


I don't think it is fair to qualify the fact that we should release the
corruption fix as soon as possible as picky.

I can't make everybody happy here as there are two valid opposite arguments
that we should release the corruption fix as soon as possible on the one
hand and that it is ridiculous to release Solr with such a major bug on the
other hand.

I won't release the current artifacts and will respin tomorrow morning EU
time (about 12 hours from now). If someone could backport SOLR-8418 to the
5.4 branch until then, that would be great, otherwise I will do it myself
before building the RC.


[jira] [Commented] (LUCENE-6980) Default to applying deletes when opening NRT reader from writer

2016-01-17 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103936#comment-15103936
 ] 

Robert Muir commented on LUCENE-6980:
-

+1

> Default to applying deletes when opening NRT reader from writer
> ---
>
> Key: LUCENE-6980
> URL: https://issues.apache.org/jira/browse/LUCENE-6980
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.x
>
> Attachments: LUCENE-6980.patch
>
>
> Today, {{DirectoryReader.open}}, etc., all require you to pass a
> supremely expert {{boolean applyDeletes}}.  I think the vast majority
> of users should just default this to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103928#comment-15103928
 ] 

Mark Miller commented on SOLR-7844:
---

bq. Am I over-thinking it, and this only needs to be handled in 
ZkController.getLeaderProps(), catching NoNodeException and attempting to read 
the props from the parent?

I think so.

bq.  I ask because I'm not sure what role ShardLeaderElectionContextBase plays 
here.

This sets where it will get written out. We don't need to change that; we want 
to write out to the new spot, so at an initial glance I think the above is 
probably all that needs to be done.
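
A rough sketch of that fallback (the method and paths below are illustrative, not the actual ZkController code), using the plain ZooKeeper client: try the new {{shard1/leader}} child first and read the parent node on {{NoNodeException}}:

{code}
import org.apache.zookeeper.KeeperException.NoNodeException;
import org.apache.zookeeper.ZooKeeper;

class LeaderPropsBridgeSketch {
  // Read leader props from the new location, falling back to the pre-5.4
  // parent node so rolling upgrades keep working.
  static byte[] readLeaderProps(ZooKeeper zk, String shardPath) throws Exception {
    try {
      return zk.getData(shardPath + "/leader", false, null); // new location
    } catch (NoNodeException e) {
      return zk.getData(shardPath, false, null);             // old location
    }
  }
}
{code}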

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario:
> a three machine cluster, all of the machines are restarting at approximately 
> the same time.
> The first machine starts, writes a leader_elect ephemeral node, it's the only 
> candidate in the election so it wins and starts the leadership process. As it 
> knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes are created.
> During the period between the ZK session expiring and the new set of 
> leader_elect nodes being created, the second machine starts.
> It creates its leader_elect ephemeral nodes, as there are no other nodes it 
> wins the election and starts the leadership process. As its still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2 and declared its leadership and 
> created the nodes. However, machine3 has just deleted them, and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, however it's run serially which 
> has blocking issues, and just canceling does not cause the wait loop to 
> exit). That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103923#comment-15103923
 ] 

Shai Erera commented on SOLR-7844:
--

Where is the best place to check that in your opinion? If we do something in 
{{ZkStateReader.getLeaderPath()}}, then we cover both 
{{ZkController.getLeaderProps()}} and also {{ShardLeaderElectionContextBase}}. 
As I see it, the latter may also log failures to remove a leader node, while 
attempting to delete "shard1/leader".

So in {{ZkStateReader.getLeaderPath()}}, we can perhaps check whether "shard1" is 
an ephemeral node (looking at its {{ephemeralOwner}} value -- 0 means it's not an 
ephemeral node); if it is ephemeral, that means this is a pre-5.4 leader and we 
return it as the leader path. Otherwise, we return shard1/leader?
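
A small sketch of that check (using the raw ZooKeeper client for illustration; the real code would presumably go through {{SolrZkClient}}):

{code}
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

class LeaderPathSketch {
  // A pre-5.4 leader registered the shard node itself as an ephemeral node;
  // a newer leader lives under a "leader" child of a regular parent node.
  static String leaderPath(ZooKeeper zk, String shardPath) throws Exception {
    Stat stat = zk.exists(shardPath, false);
    if (stat != null && stat.getEphemeralOwner() != 0) {
      return shardPath;            // ephemeral -> pre-5.4 style leader node
    }
    return shardPath + "/leader";  // non-ephemeral parent -> new location
  }
}
{code}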

Am I over-thinking it, and this only needs to be handled in 
{{ZkController.getLeaderProps()}}, catching {{NoNodeException}} and attempting 
to read the props from the parent? I ask because I'm not sure what role 
{{ShardLeaderElectionContextBase}} plays here.

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario:
> a three machine cluster, all of the machines are restarting at approximately 
> the same time.
> The first machine starts, writes a leader_elect ephemeral node, it's the only 
> candidate in the election so it wins and starts the leadership process. As it 
> knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes are created.
> During the period between the ZK session expiring and the new set of 
> leader_elect nodes being created, the second machine starts.
> It creates its leader_elect ephemeral nodes, as there are no other nodes it 
> wins the election and starts the leadership process. As it's still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2 and declared its leadership and 
> created the nodes. However, machine3 has just deleted them, and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, however it's run serially which 
> has blocking issues, and just canceling does not cause the wait loop to 
> exit). That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103909#comment-15103909
 ] 

Mark Miller commented on SOLR-7844:
---

Yeah, 5x needs a little back-compat bridge that checks the old location if the 
new one does not exist.

We don't have any tests at all for rolling updates, so it is all a bit hit or 
miss, but I believe that should work out fine.

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario:
> a three machine cluster, all of the machines are restarting at approximately 
> the same time.
> The first machine starts, writes a leader_elect ephemeral node, it's the only 
> candidate in the election so it wins and starts the leadership process. As it 
> knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes are created.
> During the period between the ZK session expiring and the new set of 
> leader_elect nodes being created, the second machine starts.
> It creates its leader_elect ephemeral nodes, as there are no other nodes it 
> wins the election and starts the leadership process. As it's still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2 and declared its leadership and 
> created the nodes. However, machine3 has just deleted them, and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, however it's run serially which 
> has blocking issues, and just canceling does not cause the wait loop to 
> exit). That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6980) Default to applying deletes when opening NRT reader from writer

2016-01-17 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6980:
---
Attachment: LUCENE-6980.patch

Simple rote patch.

> Default to applying deletes when opening NRT reader from writer
> ---
>
> Key: LUCENE-6980
> URL: https://issues.apache.org/jira/browse/LUCENE-6980
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.x
>
> Attachments: LUCENE-6980.patch
>
>
> Today, {{DirectoryReader.open}}, etc., all require you to pass a
> supremely expert {{boolean applyDeletes}}.  I think the vast majority
> of users should just default this to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6980) Default to applying deletes when opening NRT reader from writer

2016-01-17 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6980:
--

 Summary: Default to applying deletes when opening NRT reader from 
writer
 Key: LUCENE-6980
 URL: https://issues.apache.org/jira/browse/LUCENE-6980
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.x


Today, {{DirectoryReader.open}}, etc., all require you to pass a
supremely expert {{boolean applyDeletes}}.  I think the vast majority
of users should just default this to true.
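
For illustration, a sketch of what the simplified call would look like (this assumes the new overload proposed here; before the change the boolean had to be passed explicitly):

{code}
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;

class NrtOpenSketch {
  static DirectoryReader openNrt(IndexWriter writer) throws Exception {
    // Previously: DirectoryReader.open(writer, true) -- the caller had to
    // pass the expert applyDeletes flag every time.
    // With the proposed default, deletes are simply applied:
    return DirectoryReader.open(writer);
  }
}
{code}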




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6971) Remove StorableField, StoredDocument

2016-01-17 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103900#comment-15103900
 ] 

Uwe Schindler commented on LUCENE-6971:
---

Thanks Mike!

> Remove StorableField, StoredDocument
> 
>
> Key: LUCENE-6971
> URL: https://issues.apache.org/jira/browse/LUCENE-6971
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 6.0
>
> Attachments: LUCENE-6971.patch
>
>
> I think this has proven to be an awkward/forced separation, e.g. that doc 
> values are handled as {{StorableField}}s.
> For the 5.x release we had just "kicked the can down the road" by pushing 
> this change off of the branch, making backporting sometimes hard, but I think 
> for 6.x we should just remove it and put the document API back to what we 
> have in 5.x.
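
In concrete terms, trunk goes back to the 5.x-style stored-fields API, roughly like this sketch (the field name is made up):

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexableField;
import org.apache.lucene.search.IndexSearcher;

class StoredDocSketch {
  // With StorableField/StoredDocument removed, retrieving stored fields
  // returns a plain Document again, as in 5.x.
  static String title(IndexSearcher searcher, int docID) throws Exception {
    Document doc = searcher.doc(docID);       // was StoredDocument on trunk
    IndexableField f = doc.getField("title"); // was StorableField on trunk
    return f == null ? null : f.stringValue();
  }
}
{code}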



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6971) Remove StorableField, StoredDocument

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103898#comment-15103898
 ] 

ASF subversion and git services commented on LUCENE-6971:
-

Commit 1725117 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1725117 ]

LUCENE-6971: remove StorableField, StoredDocument

> Remove StorableField, StoredDocument
> 
>
> Key: LUCENE-6971
> URL: https://issues.apache.org/jira/browse/LUCENE-6971
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 6.0
>
> Attachments: LUCENE-6971.patch
>
>
> I think this has proven to be an awkward/forced separation, e.g. that doc 
> values are handled as {{StorableField}}s.
> For the 5.x release we had just "kicked the can down the road" by pushing 
> this change off of the branch, making backporting sometimes hard, but I think 
> for 6.x we should just remove it and put the document API back to what we 
> have in 5.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6971) Remove StorableField, StoredDocument

2016-01-17 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6971.

Resolution: Fixed

> Remove StorableField, StoredDocument
> 
>
> Key: LUCENE-6971
> URL: https://issues.apache.org/jira/browse/LUCENE-6971
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 6.0
>
> Attachments: LUCENE-6971.patch
>
>
> I think this has proven to be an awkward/forced separation, e.g. that doc 
> values are handled as {{StorableField}}s.
> For the 5.x release we had just "kicked the can down the road" by pushing 
> this change off of the branch, making backporting sometimes hard, but I think 
> for 6.x we should just remove it and put the document API back to what we 
> have in 5.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_80) - Build # 15280 - Still Failing!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15280/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.hadoop.MorphlineMapperTest: 
1) Thread[id=15, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest] at sun.misc.Unsafe.park(Native Method)  
   at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)  
   at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.hadoop.MorphlineMapperTest: 
   1) Thread[id=15, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([6665254E4F4BDB63]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=15, 
name=Thread-2, state=TIMED_WAITING, group=TGRP-MorphlineMapperTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)  
   at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=15, name=Thread-2, state=TIMED_WAITING, 
group=TGRP-MorphlineMapperTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1033)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:282)
at org.apache.solr.hadoop.HeartBeater.run(HeartBeater.java:109)
at __randomizedtesting.SeedInfo.seed([6665254E4F4BDB63]:0)


FAILED:  org.apache.solr.hadoop.MorphlineMapperTest.testMapper

Error Message:
Malformed / non-existent locale:  near: { # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_6665254E4F4BDB63-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 160 #  captureAttr : true # default is false "capture" : [ # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_6665254E4F4BDB63-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 163 #  twitter feed schema "user_friends_count", # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_6665254E4F4BDB63-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 164 "user_location", # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_6665254E4F4BDB63-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 165 "user_description", # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_6665254E4F4BDB63-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 166 "user_statuses_count", # 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/contrib/solr-map-reduce/test/J2/temp/solr.hadoop.MorphlineMapperTest_6665254E4F4BDB63-001/tempDir-001/test-morphlines/solrCellDocumentType

[jira] [Commented] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103881#comment-15103881
 ] 

ASF subversion and git services commented on LUCENE-6932:
-

Commit 1725112 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1725112 ]

LUCENE-6932: RAMDirectory's IndexInput should always throw EOFE if you seek 
beyond the end of the file and then try to read

> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: Trunk
>Reporter: Stéphane Campinas
> Fix For: Trunk, 5.x
>
> Attachments: LUCENE-6932.patch, issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past EOF. However, no EOFException is thrown.
> To reproduce the error, please use the test seed: 
> -Dtests.seed=8273A81C129D35E2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-01-17 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6932.

   Resolution: Fixed
Fix Version/s: Trunk
   5.x

Thanks [~stephane campinas]

> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: Trunk
>Reporter: Stéphane Campinas
> Fix For: 5.x, Trunk
>
> Attachments: LUCENE-6932.patch, issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past EOF. However, no EOFException is thrown.
> To reproduce the error, please use the test seed: 
> -Dtests.seed=8273A81C129D35E2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6932) Seek past EOF with RAMDirectory should throw EOFException

2016-01-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103878#comment-15103878
 ] 

ASF subversion and git services commented on LUCENE-6932:
-

Commit 1725111 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1725111 ]

LUCENE-6932: RAMDirectory's IndexInput should always throw EOFE if you seek 
beyond the end of the file and then try to read
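To make the expected behavior concrete, here is a rough sketch of the kind of check the 
fix enforces (the file name and seek offset are arbitrary, and this is not the attached 
test case):

{code:java}
import java.io.EOFException;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.store.RAMDirectory;

public class SeekPastEofSketch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    try (IndexOutput out = dir.createOutput("one-byte.bin", IOContext.DEFAULT)) {
      out.writeByte((byte) 42); // the file is exactly one byte long
    }
    try (IndexInput in = dir.openInput("one-byte.bin", IOContext.DEFAULT)) {
      in.seek(10);   // position well past EOF
      in.readByte(); // with the fix this read must throw EOFException
      System.out.println("BUG: no exception was thrown");
    } catch (EOFException expected) {
      System.out.println("EOFException thrown as expected");
    } finally {
      dir.close();
    }
  }
}
{code}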

> Seek past EOF with RAMDirectory should throw EOFException
> -
>
> Key: LUCENE-6932
> URL: https://issues.apache.org/jira/browse/LUCENE-6932
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: Trunk
>Reporter: Stéphane Campinas
> Attachments: LUCENE-6932.patch, issue6932.patch, testcase.txt
>
>
> In the JUnit test case from the attached file, I call "IndexInput.seek()" on 
> a position past EOF. However, no EOFException is thrown.
> To reproduce the error, please use the test seed: 
> -Dtests.seed=8273A81C129D35E2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8560) Add RequestStatusState enum

2016-01-17 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-8560:
-
Attachment: SOLR-8560.patch

The patch introduces {{RequestStatusState}} and replaces code that did e.g. 
{{state.equals("completed")}} with {{state == RequestStatusState.COMPLETED}}. I 
also cleaned up {{CollectionsHandler.CollectionOperation.REQUESTSTATUS_OP}} a 
bit.
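
For readers following along, a rough sketch of what such an enum can look like (this is 
an illustration, not the attached patch; constants other than COMPLETED and NOT_FOUND 
are assumptions):

{code:java}
import java.util.Locale;

/** Illustrative only: maps the REQUESTSTATUS API's lowercase state strings to constants. */
public enum RequestStatusState {
  COMPLETED("completed"),
  FAILED("failed"),
  RUNNING("running"),
  SUBMITTED("submitted"),
  NOT_FOUND("notfound");

  private final String key;

  RequestStatusState(String key) {
    this.key = key;
  }

  /** The lowercase form used by the API. */
  public String getKey() {
    return key;
  }

  /** Parses the lowercase API value, e.g. "notfound" resolves to NOT_FOUND. */
  public static RequestStatusState fromKey(String key) {
    for (RequestStatusState state : values()) {
      if (state.key.equals(key.toLowerCase(Locale.ROOT))) {
        return state;
      }
    }
    throw new IllegalArgumentException("Unknown request status state: " + key);
  }
}
{code}

Client code can then compare states with {{state == RequestStatusState.COMPLETED}} instead 
of string equality.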

> Add RequestStatusState enum
> ---
>
> Key: SOLR-8560
> URL: https://issues.apache.org/jira/browse/SOLR-8560
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8560.patch
>
>
> The REQUESTSTATUS API returns a "state" that is currently a String. This 
> issue adds a {{RequestStatusState}} enum with the currently returned 
> constants, for easier integration by clients. For backwards compatibility it 
> parses the returned state in a lowercase form, as well as resolves "notfound" to 
> {{NOT_FOUND}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.3 #15: POMs out of sync

2016-01-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.3/15/

No tests ran.

Build Log:
[...truncated 25062 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/build.xml:742: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/build.xml:231: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/lucene/build.xml:410: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/lucene/common-build.xml:1673:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/lucene/common-build.xml:589:
 Error deploying artifact 'org.apache.lucene:lucene-test-framework:jar': Error 
installing artifact's metadata: Error while deploying metadata: Error 
transferring file

Total time: 10 minutes 6 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk-9-ea+95) - Build # 15279 - Failure!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15279/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=11428, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)2) Thread[id=11430, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=11432, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=11429, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=11431, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=11428, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=11430, name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5555 - Failure!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows//
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestQuerySenderListener

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestQuerySenderListener_7420FC06C1386113-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestQuerySenderListener_7420FC06C1386113-001\init-core-data-001

C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestQuerySenderListener_7420FC06C1386113-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestQuerySenderListener_7420FC06C1386113-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestQuerySenderListener_7420FC06C1386113-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestQuerySenderListener_7420FC06C1386113-001\init-core-data-001
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestQuerySenderListener_7420FC06C1386113-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestQuerySenderListener_7420FC06C1386113-001

at __randomizedtesting.SeedInfo.seed([7420FC06C1386113]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:215)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 1788 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build\core\test\temp\junit4-J0-20160117_154133_480.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Default case invoked for: 
   [junit4]opcode  = 0, "Node"
   [junit4] Default case invoked for: 
   [junit4]opcode  = 0, "Node"
   [junit4] Default case invoked for: 
   [junit4]opcode  = 200, "Phi"
   [junit4] <<< JVM J0: EOF 

[...truncated 8520 lines...]
   [junit4] Suite: org.apache.solr.core.TestQuerySenderListener
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestQuerySenderListener_7420FC06C1386113-001\init-core-data-001
   [junit4]   2> 1256626 INFO  
(SUITE-TestQuerySenderListener-seed#[7420FC06C1386113]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 1256628 INFO  
(SUITE-TestQuerySenderListener-seed#[7420FC06C1386113]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 1256628 INFO  
(SUITE-TestQuerySenderListener-seed#[7420FC06C1386113]-worker) [] 
o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1'
   [junit4]   2> 1256628 INFO  
(SUITE-TestQuerySenderListener-seed#[7420FC06C1386113]-worker) [] 
o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 1256628 INFO  
(SUITE-TestQuerySenderListener-seed#[7420FC06C1386113]-worker) [] 
o.a.s.c.SolrResourceLoader using system property solr.solr.home: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr
   [junit4]   2> 1256629 INFO  

[jira] [Created] (SOLR-8560) Add RequestStatusState enum

2016-01-17 Thread Shai Erera (JIRA)
Shai Erera created SOLR-8560:


 Summary: Add RequestStatusState enum
 Key: SOLR-8560
 URL: https://issues.apache.org/jira/browse/SOLR-8560
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Shai Erera
Assignee: Shai Erera
Priority: Minor
 Fix For: 5.5, Trunk


The REQUESTSTATUS API returns a "state" that is currently a String. This issue 
adds a {{RequestStatusState}} enum with the currently returned constants, for 
easier integration by clients. For backwards compatibility it will parse the 
returned state in a lowercase form, as well as resolve "notfound" to {{NOT_FOUND}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8560) Add RequestStatusState enum

2016-01-17 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-8560:
-
Description: The REQUESTSTATUS API returns a "state" that is currently a 
String. This issue adds a {{RequestStatusState}} enum with the currently 
returned constants, for easier integration by clients. For backwards 
compatibility it parses the returned state in a lowercase form, as well as resolves 
"notfound" to {{NOT_FOUND}}.  (was: The REQUESTSTATUS API returns a "state" 
that is currently a String. This issue adds a {{RequestStatusState}} enum with 
the currently returned constants, for easier integration by clients. For 
backwards compatibility it will parse the returned state in a lowercase form, 
as well as resolve "notfound" to {{NOT_FOUND}}.)

> Add RequestStatusState enum
> ---
>
> Key: SOLR-8560
> URL: https://issues.apache.org/jira/browse/SOLR-8560
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
>
> The REQUESTSTATUS API returns a "state" that is currently a String. This 
> issue adds a {{RequestStatusState}} enum with the currently returned 
> constants, for easier integration by clients. For backwards compatibility it 
> parses the returned state in a lowercase form, as well as resolves "notfound" to 
> {{NOT_FOUND}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103781#comment-15103781
 ] 

Shai Erera commented on SOLR-7844:
--

I think that if we also check for the leader under "/shard1" as a fallback, the 
situation will resolve itself: the new 5.4 nodes will come up finding a leader, 
and when that leader dies, the "shard1" EPHEMERAL node will be deleted and a 5.4 
node will create the proper structure in ZK. What do you think?

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that could cause two hosts to become 
> leader.
> Scenario:
> a three machine cluster, all of the machines are restarting at approximately 
> the same time.
> The first machine starts, writes a leader_elect ephemeral node, it's the only 
> candidate in the election so it wins and starts the leadership process. As it 
> knows it has peers, it begins to block waiting for the peers to arrive.
> During this period of blocking[1] the ZK connection drops and the session 
> expires.
> A new ZK session is established, and ElectionContext.cancelElection is 
> called. Then register() is called and a new set of leader_elect ephemeral 
> nodes are created.
> During the period between the ZK session expiring, and new set of 
> leader_elect nodes being created the second machine starts.
> It creates its leader_elect ephemeral nodes; as there are no other nodes, it 
> wins the election and starts the leadership process. As it's still missing one 
> of its peers, it begins to block waiting for the third machine to join.
> There is now a race between machine1 & machine2, both of whom think they are 
> the leader.
> So far, this isn't too bad, because the machine that loses the race will fail 
> when it tries to create the /collection/name/leader/shard1 node (as it 
> already exists), and will rejoin the election.
> While this is happening, machine3 has started and has queued for leadership 
> behind machine2.
> If the loser of the race is machine2, when it rejoins the election it cancels 
> the current context, deleting its leader_elect ephemeral nodes.
> At this point, machine3 believes it has become leader (the watcher it has on 
> the leader_elect node fires), and it runs the LeaderElector::checkIfIAmLeader 
> method. This method DELETES the current /collection/name/leader/shard1 node, 
> then starts the leadership process (as all three machines are now running, it 
> does not block to wait).
> So, machine1 won the race with machine2 and declared its leadership and 
> created the nodes. However, machine3 has just deleted them, and recreated 
> them for itself. So machine1 and machine3 both believe they are the leader.
> I am thinking that the fix should be to cancel & close all election contexts 
> immediately on reconnect (we do cancel them, but that runs serially, which has 
> blocking issues, and just canceling does not cause the wait loop to exit). 
> That election context logic already has checks on the closed flag, so 
> they should exit if they see it has been closed.
> I'm working on a patch for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.4 #17: POMs out of sync

2016-01-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.4/17/

No tests ran.

Build Log:
[...truncated 25343 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.4/build.xml:808: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.4/build.xml:297: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.4/lucene/build.xml:416: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.4/lucene/common-build.xml:2248:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.4/lucene/analysis/build.xml:122:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.4/lucene/analysis/build.xml:38:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.4/lucene/common-build.xml:1676:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.4/lucene/common-build.xml:592:
 Error deploying artifact 'org.apache.lucene:lucene-analyzers-uima:jar': Error 
deploying artifact: Error transferring file

Total time: 27 minutes 0 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [VOTE] Release Lucene/Solr 5.4.1 RC1

2016-01-17 Thread Adrien Grand
On Sat, Jan 16, 2016 at 17:03, Ramkumar R. Aiyengar 
wrote:

> I missed the initial RC, but if we are going to re-spin (and looks like
> Yonik has a patch up now), I would like to back-port SOLR-8418.
>

+1 to merge to 5.4. We'll either respin or do a 5.4.2 shortly so this fix
will reach our users soon.


[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1659: POMs out of sync

2016-01-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1659/

No tests ran.

Build Log:
[...truncated 25847 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:800: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:299: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build.xml:417:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:2154:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:1648:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:568:
 Error deploying artifact 'org.apache.lucene:lucene-facet:jar': Error deploying 
artifact: Error transferring file

Total time: 25 minutes 8 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 438 - Failure

2016-01-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/438/

No tests ran.

Build Log:
[...truncated 53166 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (9.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.5.0-src.tgz...
   [smoker] 28.7 MB in 0.04 sec (675.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.0.tgz...
   [smoker] 63.4 MB in 0.09 sec (673.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.0.zip...
   [smoker] 73.8 MB in 0.09 sec (790.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6170 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (44.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.5.0-src.tgz...
   [smoker] 37.5 MB in 0.05 sec (698.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.0.tgz...
   [smoker] 130.2 MB in 0.18 sec (711.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.0.zip...
   [smoker] 138.1 MB in 0.19 sec (713.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.5.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 

[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 338 - Still Failing!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/338/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionReloadTest.testReloadedLeaderStateAfterZkSessionLoss

Error Message:
Shards in the state does not match what we set:3 vs 4

Stack Trace:
java.lang.AssertionError: Shards in the state does not match what we set:3 vs 4
at 
__randomizedtesting.SeedInfo.seed([84FA3AFD4CF2D5D:F36118F272225E08]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:398)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:311)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.security.PKIAuthenticationIntegrationTest.testPkiAuth

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 
seconds
at 
__randomizedtesting.SeedInfo.seed([84FA3AFD4CF2D5D:38F13AFE3479AAFC]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:175)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecover

[jira] [Updated] (LUCENE-5687) Split off SinkTokenStream from TeeSinkTokenFilter (was add PrefillTokenStream ...)

2016-01-17 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-5687:
-
Summary: Split off SinkTokenStream from TeeSinkTokenFilter (was add 
PrefillTokenStream ...)  (was: Add PrefillTokenStream in analysis common module)

> Split off SinkTokenStream from TeeSinkTokenFilter (was add PrefillTokenStream 
> ...)
> --
>
> Key: LUCENE-5687
> URL: https://issues.apache.org/jira/browse/LUCENE-5687
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.9
>Reporter: Paul Elschot
>Priority: Minor
> Fix For: 4.9
>
> Attachments: LUCENE-5687.patch, LUCENE-5687.patch, LUCENE-5687.patch, 
> LUCENE-5687.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5687) Add PrefillTokenStream in analysis common module

2016-01-17 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-5687:
-
Attachment: LUCENE-5687.patch

Patch of 17 Jan 2016.

Split off SinkTokenStream and TokenStates from TeeSinkTokenFilter, so they can 
be reused elsewhere.
This also adds a close() method to TeeSinkTokenFilter and some javadocs there.

Should TeeSinkTokenFilter.incrementToken() be a final method?
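
For context, a minimal usage sketch of the existing tee/sink pattern before the split 
(assuming the 5.x analysis-common API; the tokenizer, input text and consumption order 
shown here are illustrative, and exact signatures may differ slightly between versions):

{code:java}
import java.io.StringReader;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.sinks.TeeSinkTokenFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TeeSinkSketch {
  public static void main(String[] args) throws Exception {
    WhitespaceTokenizer source = new WhitespaceTokenizer();
    source.setReader(new StringReader("tee sink token filter"));

    // The tee caches every token it produces so that sinks can replay them later.
    TeeSinkTokenFilter tee = new TeeSinkTokenFilter(source);
    TeeSinkTokenFilter.SinkTokenStream sink = tee.newSinkTokenStream();

    // Normally the tee is consumed by indexing one field; here we drain it directly.
    tee.reset();
    tee.consumeAllTokens();
    tee.end();
    tee.close();

    // Replay the cached tokens from the sink, e.g. to feed a second field.
    CharTermAttribute term = sink.addAttribute(CharTermAttribute.class);
    sink.reset();
    while (sink.incrementToken()) {
      System.out.println(term.toString());
    }
    sink.end();
    sink.close();
  }
}
{code}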

> Add PrefillTokenStream in analysis common module
> 
>
> Key: LUCENE-5687
> URL: https://issues.apache.org/jira/browse/LUCENE-5687
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 4.9
>Reporter: Paul Elschot
>Priority: Minor
> Fix For: 4.9
>
> Attachments: LUCENE-5687.patch, LUCENE-5687.patch, LUCENE-5687.patch, 
> LUCENE-5687.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 342 - Still Failing!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/342/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([B15629E198DF5F36:580C92D90646CF9E]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: 

00


request was:q=id:2&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr

[JENKINS] Lucene-Solr-NightlyTests-5.4 - Build # 18 - Still Failing

2016-01-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.4/18/

3 tests failed.
FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.mrRun

Error Message:
Failed on local exception: java.io.IOException: Connection reset by peer; Host 
Details : local host is: "lucene1-us-west/10.41.0.5"; destination host is: 
"lucene1-us-west.apache.org":58873; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: Connection 
reset by peer; Host Details : local host is: "lucene1-us-west/10.41.0.5"; 
destination host is: "lucene1-us-west.apache.org":58873; 
at 
__randomizedtesting.SeedInfo.seed([CF4E7F99D80E3DFF:C11CCB97D9980FF0]:0)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy45.getClusterMetrics(Unknown Source)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy46.getClusterMetrics(Unknown Source)
at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:461)
at 
org.apache.hadoop.mapred.ResourceMgrDelegate.getClusterMetrics(ResourceMgrDelegate.java:151)
at 
org.apache.hadoop.mapred.YARNRunner.getClusterMetrics(YARNRunner.java:179)
at 
org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:246)
at org.apache.hadoop.mapred.JobClient$3.run(JobClient.java:719)
at org.apache.hadoop.mapred.JobClient$3.run(JobClient.java:717)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:717)
at 
org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:645)
at 
org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:608)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.mrRun(MorphlineBasicMiniMRTest.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(Randomiz

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2970 - Failure!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2970/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.hadoop.MorphlineMapperTest.testMapper

Error Message:
Malformed / non-existent locale:  near: { # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 202 "lowernames" : true, # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 158 "solrLocator" : { # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 158 "collection" : "collection1" }, # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 201 #  various java.text.SimpleDateFormat #  xpath : 
"/xhtml:html/xhtml:body/xhtml:div/descendant:node()" "uprefix" : 
"ignored_", # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 207 #  Tika parsers to be registered. If multiple parsers support the same 
MIME type,  #  the parser is chosen that is closest to the bottom in this 
list: "parsers" : [ # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 208 { # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 208 "parser" : "org.apache.tika.parser.asm.ClassParser" }, 
# 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 211 #  { parser : org.apache.tika.parser.AutoDetectParser }
   #  { parser : org.gagravarr.tika.OggParser, 
additionalSupportedMimeTypes : [audio/ogg] } { # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 211 "parser" : "org.gagravarr.tika.FlacParser" }, 
# 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 212 { # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 212 "parser" : "org.apache.tika.parser.audio.AudioParser" 
}, # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 213 { # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 213 "parser" : "org.apache.tika.parser.audio.MidiParser" 
}, # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 214 { # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 214 "parser" : "org.apache.tika.parser.crypto.Pkcs7Parser" 
}, # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108C76AF-001/tempDir-001/test-morphlines/solrCellDocumentTypes.conf:
 215 { # 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/contrib/solr-map-reduce/test/J1/temp/solr.hadoop.MorphlineMapperTest_C92266C8108

[JENKINS] Lucene-Solr-5.4-Linux (32bit/jdk1.7.0_80) - Build # 397 - Failure!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.4-Linux/397/
Java: 32bit/jdk1.7.0_80 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:54555/_vgl/collection1: Bad Request
request: 
http://127.0.0.1:34323/_vgl/collection1/update?update.chain=distrib-dup-test-chain-explicit&update.distrib=TOLEADER&distrib.from=http%3A%2F%2F127.0.0.1%3A54555%2F_vgl%2Fcollection1%2F&wt=javabin&version=2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:54555/_vgl/collection1: Bad Request



request: 
http://127.0.0.1:34323/_vgl/collection1/update?update.chain=distrib-dup-test-chain-explicit&update.distrib=TOLEADER&distrib.from=http%3A%2F%2F127.0.0.1%3A54555%2F_vgl%2Fcollection1%2F&wt=javabin&version=2
at 
__randomizedtesting.SeedInfo.seed([639B3709D4E0FCE:8E6D8CAA33B26236]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
at 
org.apache.solr.BaseDistributedSearchTestCase.add(BaseDistributedSearchTestCase.java:512)
at 
org.apache.solr.cloud.BasicDistributedZkTest.testUpdateProcessorsRunOnlyOnce(BasicDistributedZkTest.java:645)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:356)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:964)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:939)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoSha

Re: [VOTE] Release Lucene/Solr 5.4.1 RC1

2016-01-17 Thread Shai Erera
I just posted a problem I ran into while upgrading from Solr 5.3 to 5.4
(https://issues.apache.org/jira/browse/SOLR-7844). I don't know if it should
be another reason to hold off on 5.4.1. If there is a workaround, then it
shouldn't. But otherwise (I hope I'm drawing the right conclusions here),
SOLR-7844 fixed leader election in such a way that it prevents upgrading from
5.3 (unless you take down the entire Solr cluster).

Shai

On Sun, Jan 17, 2016 at 5:00 AM Mark Miller  wrote:

> Always nice to be patient and accommodating for reroll 1. You know, since
> we are all friends here. Best to get picky later on.
>
> Mark
> On Sat, Jan 16, 2016 at 9:27 PM david.w.smi...@gmail.com <
> david.w.smi...@gmail.com> wrote:
>
>> Clearly to me, the facet bug is a critical one to get into the release
>> (and so is index corruption) and I'm glad it was caught in time to make it
>> into this release.  I think these are the only "critical" issues I'm aware
>> of, so I'm not concerned about "the floodgates opening" unless I hear of
>> other clearly non-critical issues jumping on too.
>>
>> Thanks for doing the release Adrien!
>> ~ David
>>
>> On Sat, Jan 16, 2016 at 8:24 PM Erick Erickson 
>> wrote:
>>
>>> bq: How is this issue critical?
>>>
>>> Well, a heck of a lot of clients I deal with use facets to do analysis
>>> of their corpora; think of it as "poor man's stats". To tell them that
>>> "well, the facet counts will not be accurate sometimes" is a tough
>>> pill to swallow.
>>>
>>> Where we're at with this is essentially telling Solr users "skip all
>>> 5.3 and 5.4 versions if you want your facet counts to be accurate".
>>> Which sets the bar for releasing a 5.5 I guess.
>>>
>>> That said I can deal with some fuzziness on facet counts, but I really
>>> can't deal with index corruption so I guess it's a judgement call.
>>>
>>>
>>>
>>> On Sat, Jan 16, 2016 at 11:41 AM, Robert Muir  wrote:
>>> > I don't think we should open the floodgates. How is this issue
>>> critical?
>>> >
>>> > We need to get the index corruption fix out. +1 to release as is.
>>> >
>>> > On Sat, Jan 16, 2016 at 11:03 AM, Ramkumar R. Aiyengar
>>> >  wrote:
>>> >> I missed the initial RC, but if we are going to re-spin (and looks
>>> like
>>> >> Yonik has a patch up now), I would like to back-port SOLR-8418.
>>> Should be
>>> >> fairly benign, let me know if there are any concerns..
>>> >>
>>> >> On Fri, Jan 15, 2016 at 5:59 PM, Adrien Grand 
>>> wrote:
>>> >>>
>>> >>> Thanks Yonik, let's see what this bug boils down to and how quickly
>>> we can
>>> >>> get it fixed.
>>> >>>
>>> >>> Le ven. 15 janv. 2016 à 17:15, Yonik Seeley  a
>>> écrit :
>>> 
>>>  We've discovered a very serious bug, currently unknown how deep it
>>>  runs, but may make sense to respin if we can get it ironed out
>>> quickly
>>>  enough:
>>>  https://issues.apache.org/jira/browse/SOLR-8496
>>> 
>>>  -Yonik
>>> 
>>> 
>>>  On Thu, Jan 14, 2016 at 5:41 AM, Adrien Grand 
>>> wrote:
>>>  > Please vote for the RC1 release candidate for Lucene/Solr 5.4.1
>>>  >
>>>  > The artifacts can be downloaded from:
>>>  >
>>>  >
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.1-RC1-rev1724447/
>>>  >
>>>  > You can run the smoke tester directly with this command:
>>>  > python3 -u dev-tools/scripts/smokeTestRelease.py
>>>  >
>>>  >
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.1-RC1-rev1724447/
>>>  >
>>>  > The smoke tester already passed for me both with the local and
>>> remote
>>>  > artifacts, so here is my +1.
>>> 
>>> 
>>> -
>>>  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>>  For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Not sent from my iPhone or my Blackberry or anyone else's
>>> >
>>> > -
>>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>>> >
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
> --
> - Mark
> about.me/markrmiller
>


[jira] [Commented] (SOLR-7844) Zookeeper session expiry during shard leader election can cause multiple leaders.

2016-01-17 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103645#comment-15103645
 ] 

Shai Erera commented on SOLR-7844:
--

[~markrmil...@gmail.com] this seems to break upgrading existing 5.x (e.g. 5.3) 
clusters to 5.4, unless I missed a "migration" step. If you do a rolling 
upgrade, i.e. take one of the nodes down, replace the JARs with the 5.4 ones and 
restart the node, you'll see exceptions like this:

{noformat}
org.apache.solr.common.SolrException: Error getting leader from zk for shard 
shard1
at org.apache.solr.cloud.ZkController.getLeader(ZkController.java:1034)
at org.apache.solr.cloud.ZkController.register(ZkController.java:940)
at org.apache.solr.cloud.ZkController.register(ZkController.java:883)
at org.apache.solr.core.ZkContainer$2.run(ZkContainer.java:184)
at org.apache.solr.core.ZkContainer.registerInZk(ZkContainer.java:213)
at org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:696)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:750)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:716)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:623)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:204)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:184)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:438)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
...
Caused by: org.apache.solr.common.SolrException: Could not get leader props
at org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1081)
at org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1045)
at org.apache.solr.cloud.ZkController.getLeader(ZkController.java:1001)
... 35 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
KeeperErrorCode = NoNode for /collections/acg-test-1/leaders/shard1/leader
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)
at org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1059)
{noformat}

When the 5.4 nodes come up, they don't find the 
{{/collections/coll/leaders/shard1/leader}} path and fail. I am not quite sure 
how to recover from this though, since the cluster has a mixture of 5.3 and 5.4 
nodes. I cannot create {{.../shard1/leader}}, since {{../shard1}} is an 
EPHEMERAL node and therefore can't have child nodes. I am also not sure what 
will happen if I delete {{../shard1}} and recreate it as non-EPHEMERAL; will 
the old 5.3 nodes still work? And I need to ensure that the new 5.4 node 
doesn't become the leader if it wasn't already.

Perhaps a fix would be for 5.4 to fall back to reading the leader info from 
{{../shard1}}? Then, once the last 5.3 node is down, leadership would be taken 
over by a 5.4 node, which would recreate the leader path in the 5.4 format. 
Should this have been a ZK version change?

I'd appreciate some guidance here.
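
For illustration only, here is a rough sketch of the kind of fallback I have in 
mind; the path constants and the {{SolrZkClient}} calls used here are my 
assumptions, not code taken from the patch:

{noformat}
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.zookeeper.KeeperException;

class LeaderPropsFallbackSketch {
  // Prefer the 5.4-style .../leaders/shardN/leader child node, but fall back
  // to the old ephemeral .../leaders/shardN node while 5.3 nodes are still
  // part of the cluster.
  byte[] readLeaderProps(SolrZkClient zkClient, String collection, String shardId)
      throws KeeperException, InterruptedException {
    String newPath = "/collections/" + collection + "/leaders/" + shardId + "/leader";
    String oldPath = "/collections/" + collection + "/leaders/" + shardId;
    if (Boolean.TRUE.equals(zkClient.exists(newPath, true))) {
      return zkClient.getData(newPath, null, null, true); // 5.4-format leader props
    }
    return zkClient.getData(oldPath, null, null, true);   // pre-5.4 leader props
  }
}
{noformat}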

> Zookeeper session expiry during shard leader election can cause multiple 
> leaders.
> -
>
> Key: SOLR-7844
> URL: https://issues.apache.org/jira/browse/SOLR-7844
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.4
>Reporter: Mike Roberts
>Assignee: Mark Miller
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-7844-5x.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch, 
> SOLR-7844.patch, SOLR-7844.patch, SOLR-7844.patch
>
>
> If the ZooKeeper session expires for a host during shard leader election, the 
> ephemeral leader_elect nodes are removed. However the threads that were 
> processing the election are still present (and could believe the host won the 
> election). They will then incorrectly create leader nodes once a new 
> ZooKeeper session is established.
> This introduces a subtle race condition that 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3922 - Still Failing

2016-01-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3922/

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup

Error Message:
Failed to create backup

Stack Trace:
java.lang.AssertionError: Failed to create backup
at 
__randomizedtesting.SeedInfo.seed([DFB6C4D531D8ABD0:9E3DE4B01666589F]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.CheckBackupStatus.fetchStatus(CheckBackupStatus.java:51)
at 
org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup(TestReplicationHandlerBackup.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10055 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandlerBackup
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandlerBackup_DFB6C4D531D8ABD0-001

[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-01-17 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103631#comment-15103631
 ] 

Upayavira commented on SOLR-8542:
-

Why mstore and fstore on the schema API? Can't we have schema/feature-store and 
schema/model-store? They are way more self-explanatory and make the LTR stuff 
that little bit more accessible.
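
For example (purely illustrative; these are just the curl calls from the 
description below with the suggested names substituted in, not endpoints that 
exist in the patch as-is):

{noformat}
# hypothetical endpoints, assuming the suggested renaming
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' \
  --data-binary "@./contrib/ltr/example/techproducts-features.json" \
  -H 'Content-type:application/json'

curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' \
  --data-binary "@./contrib/ltr/example/techproducts-model.json" \
  -H 'Content-type:application/json'
{noformat}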

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: README.md, README.md, SOLR-8542-branch_5x.patch, 
> SOLR-8542-trunk.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously presented by the authors at Lucene/Solr 
> Revolution 2015 ( 
> http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp
>  ).
> The attached code was jointly worked on by Joshua Pantony, Michael Nilsson, 
> and Diego Ceccarelli.
> Any chance this could make it into a 5x release? We've also attached 
> documentation as a github MD file, but are happy to convert to a desired 
> format.
> h3. Test the plugin with solr/example/techproducts in 6 steps
> Solr provides some simple example indices. In order to test the plugin with 
> the techproducts example, please follow these steps:
> h4. 1. compile solr and the examples 
> cd solr
> ant dist
> ant example
> h4. 2. run the example
> ./bin/solr -e techproducts 
> h4. 3. stop it and install the plugin:
>
> ./bin/solr stop
> mkdir example/techproducts/solr/techproducts/lib
> cp build/contrib/ltr/lucene-ltr-6.0.0-SNAPSHOT.jar 
> example/techproducts/solr/techproducts/lib/
> cp contrib/ltr/example/solrconfig.xml 
> example/techproducts/solr/techproducts/conf/
> h4. 4. run the example again
> 
> ./bin/solr -e techproducts
> h4. 5. index some features and a model
> curl -XPUT 'http://localhost:8983/solr/techproducts/schema/fstore'  
> --data-binary "@./contrib/ltr/example/techproducts-features.json"  -H 
> 'Content-type:application/json'
> curl -XPUT 'http://localhost:8983/solr/techproducts/schema/mstore'  
> --data-binary "@./contrib/ltr/example/techproducts-model.json"  -H 
> 'Content-type:application/json'
> h4. 6. have fun !
> *access to the default feature store*
> http://localhost:8983/solr/techproducts/schema/fstore/_DEFAULT_ 
> *access to the model store*
> http://localhost:8983/solr/techproducts/schema/mstore
> *perform a query using the model, and retrieve the features*
> http://localhost:8983/solr/techproducts/query?indent=on&q=test&wt=json&rq={!ltr%20model=svm%20reRankDocs=25%20efi.query=%27test%27}&fl=*,[features],price,score,name&fv=true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+95) - Build # 15572 - Failure!

2016-01-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15572/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseParallelGC -XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=3107, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=3104, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)3) Thread[id=3108, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=3106, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=3105, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=3107, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
at 
java.util.concurrent.ScheduledThreadPoolE