[jira] [Commented] (SOLR-8059) NPE distributed DebugComponent

2015-09-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934707#comment-14934707
 ] 

Shalin Shekhar Mangar commented on SOLR-8059:
-

Yes, Jeff, I suspect the same, looking at how only fl=id,score causes this.
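
If that suspicion is right (the fl=id,score case lets distributed search skip the second 
retrieval stage, so some shard responses never carry the debug data that finishStage() 
merges), the defensive pattern would look roughly like the sketch below. This is 
illustrative only; the class, method, and map layout are assumptions, not the actual 
DebugComponent code or the committed fix.

{code}
// Illustrative sketch only -- not the actual DebugComponent source or the fix
// committed for SOLR-8059. It shows the general defensive pattern: when merging
// per-shard debug sections, tolerate a shard response that carries no "debug"
// section at all (e.g. because a retrieval stage was skipped) instead of
// dereferencing it and hitting an NPE.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ShardDebugMergeSketch {

  /** Merge per-shard debug maps, skipping shards that returned no debug info. */
  public static Map<String, Object> mergeShardDebug(List<Map<String, Object>> shardDebugSections) {
    Map<String, Object> merged = new LinkedHashMap<>();
    for (Map<String, Object> shardDebug : shardDebugSections) {
      if (shardDebug == null) {
        continue; // this shard response had no debug section; don't NPE on it
      }
      merged.putAll(shardDebug);
    }
    return merged;
  }
}
{code}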

> NPE distributed DebugComponent
> --
>
> Key: SOLR-8059
> URL: https://issues.apache.org/jira/browse/SOLR-8059
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.4
>
>
> The following URL select?debug=true&q=*:*&fl=id,score yields
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.DebugComponent.finishStage(DebugComponent.java:229)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:416)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I can reproduce it every time. Strangely enough, fl=*,score, or any other 
> content field, does not! I have seen this happening in the Highlighter as 
> well on the same code path. It makes little sense: how would fl influence 
> that piece of code, when the id is requested in fl after all?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8065) Improve the start/stop script shutdown logic to avoid forcefully killing Solr

2015-09-28 Thread Ere Maijala (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934685#comment-14934685
 ] 

Ere Maijala commented on SOLR-8065:
---

So, as far as I can see, the Windows solr.cmd already uses SolrCLI at 
startup but the Unix script doesn't. Is there a reason for that? Could that be 
a follow-up issue, or should everything be cleaned up here?

> Improve the start/stop script shutdown logic to avoid forcefully killing Solr
> -
>
> Key: SOLR-8065
> URL: https://issues.apache.org/jira/browse/SOLR-8065
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3
> Environment: Unix/Linux
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: patch
> Attachments: solr_stop_patch.diff
>
>
> Currently the start/stop script only waits 5 seconds before forcefully 
> killing the Solr process. In our experience this is not enough for graceful 
> shutdown on an active SolrCloud node. I propose making the shutdown wait for 
> up to 30 seconds and check the process status every second to avoid 
> unnecessary delay.
> I will attach a patch to accomplish this.
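
In Java terms, the proposed behaviour amounts to the standard wait-then-force pattern 
sketched below. This is a minimal sketch only; the real change is to the bin/solr shell 
script, and the 30-second limit and one-second poll come from the proposal above.

{code}
// Minimal sketch of the proposed shutdown logic, not the bin/solr patch itself:
// request a polite stop, poll once per second for up to 30 seconds, and only
// force-kill the process if it still has not exited.
import java.util.concurrent.TimeUnit;

public class GracefulStopSketch {

  public static void stopGracefully(Process solrProcess) throws InterruptedException {
    solrProcess.destroy(); // polite stop request (SIGTERM on Unix)
    for (int waited = 0; waited < 30; waited++) {
      if (solrProcess.waitFor(1, TimeUnit.SECONDS)) {
        return; // exited cleanly; no unnecessary extra delay
      }
    }
    solrProcess.destroyForcibly(); // last resort, equivalent to kill -9
  }
}
{code}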



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2712 - Failure!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2712/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:63920/i_p

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:63920/i_p
at 
__randomizedtesting.SeedInfo.seed([AA60EA13FA354370:2234D5C954C92E88]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:585)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:490)
at 
org.apache.solr.cloud.ShardSplitTest.incompleteOrOverlappingCustomRangeTest(ShardSplitTest.java:124)
at org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFail

[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b78) - Build # 14338 - Still Failing!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14338/
Java: 64bit/jdk1.9.0-ea-b78 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest: 1) Thread[id=243, 
name=TEST-CloudSolrClientTest.test-seed#[1A736900F2D6E6B7]-EventThread, 
state=WAITING, group=TGRP-CloudSolrClientTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
2) Thread[id=242, 
name=TEST-CloudSolrClientTest.test-seed#[1A736900F2D6E6B7]-SendThread(127.0.0.1:60210),
 state=RUNNABLE, group=TGRP-CloudSolrClientTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.client.solrj.impl.CloudSolrClientTest: 
   1) Thread[id=243, 
name=TEST-CloudSolrClientTest.test-seed#[1A736900F2D6E6B7]-EventThread, 
state=WAITING, group=TGRP-CloudSolrClientTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
   2) Thread[id=242, 
name=TEST-CloudSolrClientTest.test-seed#[1A736900F2D6E6B7]-SendThread(127.0.0.1:60210),
 state=RUNNABLE, group=TGRP-CloudSolrClientTest]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
at __randomizedtesting.SeedInfo.seed([1A736900F2D6E6B7]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=242, 
name=TEST-CloudSolrClientTest.test-seed#[1A736900F2D6E6B7]-SendThread(127.0.0.1:60210),
 state=TIMED_WAITING, group=TGRP-CloudSolrClientTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:994)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=242, 
name=TEST-CloudSolrClientTest.test-seed#[1A736900F2D6E6B7]-SendThread(127.0.0.1:60210),
 state=TIMED_WAITING, group=TGRP-CloudSolrClientTest]
at java.lang.Thread.sleep(Native Method)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:994)
at __randomizedtesting.SeedInfo.seed([1A736900F2D6E6B7]:0)


FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([1A736900F2D6E6B7:922756DA5C2A8B4F]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.allTests(CloudSolrClientTest.java:231)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.test(CloudSolrClientTest.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:504)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedExc

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_60) - Build # 5164 - Still Failing!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5164/
Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseSerialGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([AFA47D456067E903]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:354)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:675)
at 
org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:150)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([AFA47D456067E903]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:354)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:675)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtes

[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 970 - Still Failing

2015-09-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/970/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=14692, name=collection3, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=14692, name=collection3, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:48734: Could not find collection : 
awholynewstresscollection_collection3_0
at __randomizedtesting.SeedInfo.seed([76E638E196C89A23]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)




Build Log:
[...truncated 11311 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest_76E638E196C89A23-001/init-core-data-001
   [junit4]   2> 1517093 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 1517094 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 1517133 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 1517144 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 1517148 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.m.log jetty-6.1.26
   [junit4]   2> 1517170 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.m.log Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.6.0-tests.jar!/webapps/hdfs
 to ./temp/Jetty_localhost_34175_hdfs.1x4tjc/webapp
   [junit4]   2> 1517297 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.m.log NO JSP Support for /, did not find 
org.apache.jasper.servlet.JspServlet
   [junit4]   2> 1517634 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34175
   [junit4]   2> 1517714 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 1517715 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.m.log jetty-6.1.26
   [junit4]   2> 1517728 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.m.log Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.6.0-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_56528_datanode.h1snxb/webapp
   [junit4]   2> 1517870 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.m.log NO JSP Support for /, did not find 
org.apache.jasper.servlet.JspServlet
   [junit4]   2> 1518205 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:56528
   [junit4]   2> 1518272 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[76E638E196C89A23]-worker) [
] o.a.

[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b78) - Build # 14337 - Failure!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14337/
Java: 32bit/jdk1.9.0-ea-b78 -client -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=7584, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)2) Thread[id=7585, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)3) Thread[id=7587, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)4) Thread[id=7588, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)5) Thread[id=7586, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=7584, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=7585, name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurren

[jira] [Updated] (SOLR-6168) enhance collapse QParser so that "group head" documents can be selected by more complex sort options

2015-09-28 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6168:
---
Attachment: SOLR-6168.patch


bq. Does it make sense to include SOLR-6345 in this work?

I don't think so?

# Nothing about this improvement relates to faceting or tagging/excluding, nor 
should any of the changes impact any existing code related to 
faceting/tagging/excluding (unless I'm missing something, that's all external 
to the implementation of the CollapseFilter).
# SOLR-6345 doesn't even seem to have an identified root cause, so trying to 
fold in a fix for that with this issue seems like it would just unnecessarily 
stall this improvement.



Updated patch...

* added randomized test coverage of nullPolicy + sort localparam
* updated randomized test with a (verification) workaround for SOLR-8082
* added testing of the sort param with the elevation component
* added error checking + tests for trying to use more than one of the 
min/max/sort localparams
* made all of CollapsingQParserPlugin's nested classes static
** not specific to this issue, but a worthwhile code cleanup -- there was no 
reason for them not to be static before and it was really bugging me while 
reading/debugging
* created a new GroupHeadSelector class to encapsulate the min vs max vs sort 
options (see the rough sketch after this comment)
** refactored existing code to pass down GroupHeadSelector instances instead of 
"String min|max" and "boolean max" args
* cleaned up & refactored the needsScores init logic to use existing helper 
methods in ReturnFields & SortSpec and fixed it to ensure the IndexSearcher 
flag would always be set
** solves the case of a 'sort' localparam needing scores even when nothing 
about the top-level request does (i.e. no scores used in the top sort or fl)
* refactored the existing "cscore" init logic into a helper method that 
inspects GroupHeadSelector

...that last bit of refactoring was done with the intention of re-using the 
cscore method/logic to support (group head selecting) Sorts that include a 
function which includes/wraps "cscore()" ... but the basic tests I'd tried to 
write to cover this case didn't work, and the more I looked into it and thought 
about it, the more I realized this is actually very tricky.  The way "Sort" 
objects are rewritten relative to a Searcher delegates the rewriting to each 
individual SortField.  If a SortField is rewritable, then it internally does 
its rewriting _and constructs a new clean (Map) context_ in the individual 
SortField clauses.  Even if we wanted to keep track of all the maps from every 
SortField and put the same CollapseScore object in them, there's no 
uniform/generic way to access those context Maps from an arbitrary SortField.  
I don't have any clean solutions in mind, so for now I've punted on this and 
made the code throw an explicit error if you try to use "cscore()" with the 
sort local param.


I think this patch is pretty much ready to go, but I want to sleep on it and 
give it at least one more full read-through before committing.
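
For reference, here is a rough sketch of the kind of encapsulation the GroupHeadSelector 
refactor describes: one object carries whether the group head is picked by a min function, 
a max function, or a full sort, so callers stop passing around a "String min|max" plus 
"boolean max" pair. This is a hypothetical shape only; the field and method names below 
are illustrative assumptions, not the API in the attached patch.

{code}
// Hypothetical shape only -- names are assumptions for illustration, not the
// GroupHeadSelector actually committed with SOLR-6168.
public class GroupHeadSelectorSketch {

  enum Type { MIN, MAX, SORT }

  final Type type;
  final String selectorText; // the min/max function string, or the sort spec string

  private GroupHeadSelectorSketch(Type type, String selectorText) {
    this.type = type;
    this.selectorText = selectorText;
  }

  /** Build from localparams, enforcing that at most one of min/max/sort is used. */
  static GroupHeadSelectorSketch build(String min, String max, String sort) {
    int specified = (min != null ? 1 : 0) + (max != null ? 1 : 0) + (sort != null ? 1 : 0);
    if (specified > 1) {
      throw new IllegalArgumentException("only one of min, max, or sort may be specified");
    }
    if (min != null) return new GroupHeadSelectorSketch(Type.MIN, min);
    if (max != null) return new GroupHeadSelectorSketch(Type.MAX, max);
    if (sort != null) return new GroupHeadSelectorSketch(Type.SORT, sort);
    return new GroupHeadSelectorSketch(Type.MAX, "score"); // default: keep the highest-scoring doc
  }
}
{code}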



> enhance collapse QParser so that "group head" documents can be selected by 
> more complex sort options
> 
>
> Key: SOLR-6168
> URL: https://issues.apache.org/jira/browse/SOLR-6168
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.7.1, 4.8.1
>Reporter: Umesh Prasad
>Assignee: Joel Bernstein
> Attachments: CollapsingQParserPlugin-6168.patch.1-1stcut, 
> SOLR-6168-group-head-inconsistent-with-sort.patch, SOLR-6168.patch, 
> SOLR-6168.patch, SOLR-6168.patch, SOLR-6168.patch
>
>
> The fundamental goal of this issue is to add additional support to the 
> CollapseQParser so that, as an alternative to the existing min/max localparam 
> options, more robust sort syntax can be used to sort on multiple criteria 
> when selecting the "group head" documents used to represent each collapsed 
> group.
> Since support for arbitrary, multi-clause sorting is almost certainly going 
> to require more RAM than the existing min/max functionality, this new 
> functionality should be in addition to the existing min/max localparam 
> implementation, not a replacement of it.
> (NOTE: early comments made in this jira may be confusing in historical 
> context due to the way this issue was originally filed as a bug report)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 87 - Still Failing!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/87/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 
seconds
at 
__randomizedtesting.SeedInfo.seed([97806D422E90CD63:D92318913F4BDC73]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:836)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at 
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest(TestAuthorizationFramework.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluat

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 85 - Still Failing!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/85/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 
seconds
at 
__randomizedtesting.SeedInfo.seed([52D7198F3F73812C:1C746C5C2EA8903C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:836)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at 
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest(TestAuthorizationFramework.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$

[jira] [Commented] (LUCENE-6664) Replace SynonymFilter with SynonymGraphFilter

2015-09-28 Thread Matt Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934080#comment-14934080
 ] 

Matt Davis commented on LUCENE-6664:


Could this be a candidate for 6.0 (breaking back-compat)?

> Replace SynonymFilter with SynonymGraphFilter
> -
>
> Key: LUCENE-6664
> URL: https://issues.apache.org/jira/browse/LUCENE-6664
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-6664.patch, LUCENE-6664.patch, LUCENE-6664.patch, 
> LUCENE-6664.patch, usa.png, usa_flat.png
>
>
> Spinoff from LUCENE-6582.
> I created a new SynonymGraphFilter (to replace the current buggy
> SynonymFilter), that produces correct graphs (does no "graph
> flattening" itself).  I think this makes it simpler.
> This means you must add the FlattenGraphFilter yourself, if you are
> applying synonyms during indexing.
> Index-time syn expansion is a necessarily "lossy" graph transformation
> when multi-token (input or output) synonyms are applied, because the
> index does not store {{posLength}}, so there will always be phrase
> queries that should match but do not, and then phrase queries that
> should not match but do.
> http://blog.mikemccandless.com/2012/04/lucenes-tokenstreams-are-actually.html
> goes into detail about this.
> However, with this new SynonymGraphFilter, if instead you do synonym
> expansion at query time (and don't do the flattening), and you use
> TermAutomatonQuery (future: somehow integrated into a query parser),
> or maybe just "enumerate all paths and make union of PhraseQuery", you
> should get 100% correct matches (not sure about "proper" scoring
> though...).
> This new syn filter still cannot consume an arbitrary graph.
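
As a query-time illustration of the approach described above, here is a minimal sketch. 
It assumes the graph filter is on the classpath (e.g. with the attached patch applied) 
and that its constructor mirrors the existing SynonymFilter's (input, SynonymMap, 
ignoreCase); both are assumptions, not the final API.

{code}
// Minimal sketch: build a SynonymMap with the long-standing Builder API and apply
// the graph filter at query time with NO FlattenGraphFilter, so posLength survives
// for graph-aware query building. The SynonymGraphFilter constructor shape is an
// assumption (mirroring SynonymFilter), since the class only exists in the patches.
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.util.CharsRef;
import org.apache.lucene.util.CharsRefBuilder;

public class QueryTimeSynonymSketch {

  public static Analyzer queryAnalyzer() throws Exception {
    SynonymMap.Builder builder = new SynonymMap.Builder(true);
    // "usa" expands to the multi-token output "united states"; the graph filter
    // keeps the correct posLength instead of flattening it.
    CharsRef output = SynonymMap.Builder.join(new String[] {"united", "states"}, new CharsRefBuilder());
    builder.add(new CharsRef("usa"), output, true);
    final SynonymMap map = builder.build();

    return new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tokenizer = new StandardTokenizer();
        TokenStream graph = new SynonymGraphFilter(tokenizer, map, true); // assumed ctor shape
        return new TokenStreamComponents(tokenizer, graph);
      }
    };
  }
}
{code}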



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-28 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: SOLR-8030.patch

The update chain to use in case of log replay or peersync should be: 

DistributedUpdateProcessor => RunUpdateProcessor

The DistributedUpdateProcessor is needed in case of a version update.
The update chain should be kept in order to be able to call the finish() method 
at the end of log replay or peer sync.

This patch contains updated tests (TestRecovery and PeerSyncTest) which check 
the (bad) usage of the default route; it cannot be called anymore.

It contains a fix as well.
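
In rough terms, the replay-side handling described above looks like the sketch below. 
This is a hedged sketch only; the interfaces and helper names are hypothetical and not 
Solr's actual UpdateLog/UpdateRequestProcessor API.

{code}
// Hedged sketch of the replay-time behaviour described above: resolve the chain
// recorded with each logged update, process through it, and call finish() once per
// chain used at the end of the replay. Interface and method names are hypothetical.
import java.util.LinkedHashSet;
import java.util.Set;

public class ReplayChainSketch {

  interface UpdateChain {
    void process(Object loggedUpdate) throws Exception;
    void finish() throws Exception; // the reason the chain must be kept around
  }

  interface ChainResolver {
    UpdateChain resolve(String chainName); // e.g. DistributedUpdateProcessor => RunUpdateProcessor
  }

  static void replay(Iterable<Object[]> logEntries, ChainResolver resolver) throws Exception {
    Set<UpdateChain> used = new LinkedHashSet<>();
    for (Object[] entry : logEntries) {
      // entry[0] is assumed to carry the chain name recorded when the update was logged
      UpdateChain chain = resolver.resolve((String) entry[0]);
      used.add(chain);
      chain.process(entry);
    }
    for (UpdateChain chain : used) {
      chain.finish(); // one finish() per chain used during the replay
    }
  }
}
{code}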

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-28 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: (was: SOLR-8030-test.patch)

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b78) - Build # 14334 - Failure!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14334/
Java: 64bit/jdk1.9.0-ea-b78 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestMiniSolrCloudClusterBase

Error Message:
21 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestMiniSolrCloudClusterBase: 1) Thread[id=1701, 
name=Scheduler-727566654, state=TIMED_WAITING, 
group=TGRP-TestMiniSolrCloudClusterBase] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)2) Thread[id=1747, 
name=zkCallback-250-thread-1, state=TIMED_WAITING, 
group=TGRP-TestMiniSolrCloudClusterBase] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)3) Thread[id=1675, 
name=qtp1403818499-1675-selector-ServerConnectorManager@6c44b0e2/0, 
state=RUNNABLE, group=TGRP-TestMiniSolrCloudClusterBase] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
 at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
 at 
org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:746)4) Thread[id=1677, 
name=qtp1403818499-1677-selector-ServerConnectorManager@6c44b0e2/2, 
state=RUNNABLE, group=TGRP-TestMiniSolrCloudClusterBase] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
 at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
 at 
org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:746)5) Thread[id=1679, 
name=qtp1403818499-1679-acceptor-0@3ad46cc6-ServerConnector@4c962fd6{HTTP/1.1}{127.0.0.1:35604},
 state=RUNNABLE, group=TGRP-TestMiniSolrCloudClusterBase] at 
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) 
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) 
at 
org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:377)   
  at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:500)
 at 
org.eclipse.jetty.util.

[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-28 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933642#comment-14933642
 ] 

Ludovic Boutros commented on SOLR-8030:
---

I'm trying to fix this issue, but it seems to be even more complicated.

I have added the update chain name to the transaction log for the following 
operations: add, delete, deleteByQuery.

The 'finish' call can be done on each update chain used during the log replay.

However, commits are currently ignored and a final commit is fired at the end. 
If the update logic of the chain does something during the commit, it will be 
ignored, and it seems tricky to improve this.

The (de)serialization of the commands is done with element positions, so it's 
not really easy to add new elements to the transaction log. These positions 
must be updated in multiple places. Perhaps it should at least use some 
constant values...

Another thing: it seems that PeerSync uses the same (de)serialization and is 
affected by the same issue. The bad thing is that the code is duplicated. It 
will have to take the update chain into account too.
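
To make the positional-serialization point concrete, here is a hedged sketch. The entry 
layout, index constants, and field order below are illustrative assumptions, not Solr's 
actual TransactionLog format.

{code}
// Hedged sketch of why position-based (de)serialization is brittle and what the
// "constant values" suggestion buys. The layout and names are assumptions, not the
// real TransactionLog entry format.
import java.util.ArrayList;
import java.util.List;

public class TlogEntrySketch {

  // Named positions instead of bare magic numbers scattered across writer, replay,
  // and PeerSync code paths.
  static final int FLAGS_IDX = 0;
  static final int VERSION_IDX = 1;
  static final int DOC_IDX = 2;
  static final int CHAIN_NAME_IDX = 3; // new element: the update chain name

  static List<Object> writeAdd(int flags, long version, Object doc, String chainName) {
    List<Object> entry = new ArrayList<>();
    entry.add(flags);     // FLAGS_IDX
    entry.add(version);   // VERSION_IDX
    entry.add(doc);       // DOC_IDX
    entry.add(chainName); // CHAIN_NAME_IDX -- appended last so old readers still see 0..2
    return entry;
  }

  static String readChainName(List<Object> entry) {
    // Tolerate entries written before the new element existed.
    return entry.size() > CHAIN_NAME_IDX ? (String) entry.get(CHAIN_NAME_IDX) : null;
  }
}
{code}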

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8059) NPE distributed DebugComponent

2015-09-28 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933631#comment-14933631
 ] 

Jeff Wartes commented on SOLR-8059:
---

I've seen this too. I assumed it was related to 
https://issues.apache.org/jira/browse/SOLR-1880, but I've never investigated.


> NPE distributed DebugComponent
> --
>
> Key: SOLR-8059
> URL: https://issues.apache.org/jira/browse/SOLR-8059
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.4
>
>
> The following URL select?debug=true&q=*:*&fl=id,score yields
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.DebugComponent.finishStage(DebugComponent.java:229)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:416)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I can reproduce it every time. Strangely enough, fl=*,score, or any other 
> content field, does not! I have seen this happening in the Highlighter as well 
> on the same code path. It makes little sense: how would fl influence that 
> piece of code, when the id is requested in fl after all?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_60) - Build # 5163 - Failure!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5163/
Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([1AE3FCF27465F14]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([1AE3FCF27465F14]:0)




Build Log:
[...truncated 11295 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandlerBackup
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup_1AE3FCF27465F14-001\init-core-data-001
   [junit4]   2> 2726804 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.SolrTestCaseJ4 ###Starting testBackupOnCommit
   [junit4]   2> 2726805 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.SolrTestCaseJ4 Writing core.properties file to 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup_1AE3FCF27465F14-001\solr-instance-001\collection1
   [junit4]   2> 2726814 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2> 2726816 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6c3b5671{/solr,null,AVAILABLE}
   [junit4]   2> 2726817 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.e.j.s.ServerConnector Started 
ServerConnector@44e70b16{HTTP/1.1}{127.0.0.1:51724}
   [junit4]   2> 2726818 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.e.j.s.Server Started @2726870ms
   [junit4]   2> 2726818 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup_1AE3FCF27465F14-001\solr-instance-001\collection1\data,
 hostContext=/solr, hostPort=51724}
   [junit4]   2> 2726818 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 
sun.misc.Launcher$AppClassLoader@4e0e2f2a
   [junit4]   2> 2726818 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup_1AE3FCF27465F14-001\solr-instance-001\'
   [junit4]   2> 2726831 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.c.SolrXmlConfig Loading container configuration from 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup_1AE3FCF27465F14-001\solr-instance-001\solr.xml
   [junit4]   2> 2726834 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.c.CoresLocator Config-defined core root directory: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup_1AE3FCF27465F14-001\solr-instance-001\.
   [junit4]   2> 2726834 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.c.CoreContainer New CoreContainer 1217401717
   [junit4]   2> 2726834 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.c.CoreContainer Loading cores into CoreContainer 
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup_1AE3FCF27465F14-001\solr-instance-001\]
   [junit4]   2> 2726834 INFO  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.c.CoreContainer loading shared library: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestReplicationHandlerBackup_1AE3FCF27465F14-001\solr-instance-001\lib
   [junit4]   2> 2726834 WARN  
(TEST-TestReplicationHandlerBackup.testBackupOnCommit-seed#[1AE3FCF27465F14]) [ 
   ] o.a.s.c.SolrResourceLoader

[jira] [Commented] (SOLR-8065) Improve the start/stop script shutdown logic to avoid forcefully killing Solr

2015-09-28 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933591#comment-14933591
 ] 

Timothy Potter commented on SOLR-8065:
--

Ideally, we should port this poll-while-stopping logic over to SolrCLI so the 
process will work on Windows too (see the sketch below). What's there for 
Windows right now is quite hacky ...
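
A minimal sketch of that polling behavior in Java, assuming a hypothetical 
{{isSolrDown}} check; this is not the actual SolrCLI code, just the shape of 
what is proposed (check once per second for up to 30 seconds before falling 
back to a forceful kill):

{code}
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Sketch only: isSolrDown stands in for however SolrCLI would actually detect
// that the Solr process has exited (pid check, ping handler, etc.).
final class GracefulStop {
  static boolean waitForShutdown(BooleanSupplier isSolrDown) throws InterruptedException {
    final int maxWaitSeconds = 30;
    for (int i = 0; i < maxWaitSeconds; i++) {
      if (isSolrDown.getAsBoolean()) {
        return true;               // clean shutdown, no kill needed
      }
      TimeUnit.SECONDS.sleep(1);   // re-check once per second
    }
    return false;                  // still running; caller falls back to a forceful kill
  }
}
{code}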

> Improve the start/stop script shutdown logic to avoid forcefully killing Solr
> -
>
> Key: SOLR-8065
> URL: https://issues.apache.org/jira/browse/SOLR-8065
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3
> Environment: Unix/Linux
>Reporter: Ere Maijala
>Priority: Minor
>  Labels: patch
> Attachments: solr_stop_patch.diff
>
>
> Currently the start/stop script only waits 5 seconds before forcefully 
> killing the Solr process. In our experience this is not enough for graceful 
> shutdown on an active SolrCloud node. I propose making the shutdown wait for 
> up to 30 seconds and check the process status every second to avoid 
> unnecessary delay.
> I will attach a patch to accomplish this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8101) Installation script permission issues and other scripts fixes

2015-09-28 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933585#comment-14933585
 ] 

Timothy Potter commented on SOLR-8101:
--

Thanks for the patch - I haven't gone through it in detail yet, but how does 
this change to using /etc/default support multiple Solr instances per server? 
The whole point of keeping solr.in.sh in /var/solr is to support multiple Solr 
nodes per server, such as /var/solr1 and /var/solr2 ... each will need a 
separate solr.in.sh.

> Installation script permission issues and other scripts fixes
> -
>
> Key: SOLR-8101
> URL: https://issues.apache.org/jira/browse/SOLR-8101
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Sergey Urushkin
>  Labels: patch, security
> Attachments: solr-5.3.1.patch
>
>
> Until [https://issues.apache.org/jira/browse/SOLR-7871] is fixed, I suggest 
> improving the current shell scripts. The provided patch:
>   * changes the {{$SOLR_ENV}} default to {{/etc/default/solr.in.sh}}. This is 
> a *security* issue: if {{solr.in.sh}} is placed in a directory that is 
> writable by {{$SOLR_USER}}, the solr process is able to write to it, and then 
> it will be run by root on start/shutdown.
>   * changes permissions. {{$SOLR_USER}} should only be able to write to 
> {{$SOLR_VAR_DIR}}, {{$SOLR_INSTALL_DIR/server/solr-webapp}} and 
> {{$SOLR_INSTALL_DIR/server/logs}}. The {{solr-webapp}} directory might need 
> closer inspection. These directories should not be readable by other users, 
> as they may contain personal information.
>   * sets the {{$SOLR_USER}} home directory to {{$SOLR_VAR_DIR}}; as far as I 
> can see, there is no need for a {{/home/solr}} directory.
>   * adds quotes to unquoted variables
>   * adds a leading zero to chmod commands
>   * removes the group from chown commands (uses ":")
> Tested on Ubuntu 14.04 amd64, but the changes are pretty system-independent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8085) Fix a variety of issues that can result in replicas getting out of sync.

2015-09-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8085:
--
Attachment: SOLR-8085.patch

NM - the patch was missing a change. Here is a new patch that combines with 
Yonik's.

> Fix a variety of issues that can result in replicas getting out of sync.
> 
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-8085.patch, SOLR-8085.patch, SOLR-8085.patch, 
> fail.150922_125320, fail.150922_130608
>
>
> I've been discussing this failure I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has received enough docs to 
> pass peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8085) Fix a variety of issues that can result in replicas getting out of sync.

2015-09-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933521#comment-14933521
 ] 

Mark Miller commented on SOLR-8085:
---

bq. Here is a patch I just ported to trunk that starts buffering before peer 
sync and before we publish as RECOVERING.

It looks like this may pose a problem for shard splitting.

> Fix a variety of issues that can result in replicas getting out of sync.
> 
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-8085.patch, SOLR-8085.patch, fail.150922_125320, 
> fail.150922_130608
>
>
> I've been discussing this failure I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has received enough docs to 
> pass peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-8094) HdfsUpdateLog should not replay buffered documents as a replacement to dropping them.

2015-09-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8094:
--
Comment: was deleted

(was: It looks like this may pose a problem for shard splitting.)

> HdfsUpdateLog should not replay buffered documents as a replacement to 
> dropping them.
> -
>
> Key: SOLR-8094
> URL: https://issues.apache.org/jira/browse/SOLR-8094
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8094.patch
>
>
> This can cause replicas to get out of sync because replication may have 
> failed, but then if you replay you can set up peer sync to pass when it 
> should not.
> After talking to Yonik a bit, I tried doing everything the same as on the 
> local FS except skipping the truncate - because of the error handling we 
> already have, this appears to work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8094) HdfsUpdateLog should not replay buffered documents as a replacement to dropping them.

2015-09-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933520#comment-14933520
 ] 

Mark Miller commented on SOLR-8094:
---

It looks like this may pose a problem for shard splitting.

> HdfsUpdateLog should not replay buffered documents as a replacement to 
> dropping them.
> -
>
> Key: SOLR-8094
> URL: https://issues.apache.org/jira/browse/SOLR-8094
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8094.patch
>
>
> This can cause replicas to get out of sync because replication may have 
> failed, but then if you replay you can set up peer sync to pass when it 
> should not.
> After talking to Yonik a bit, I tried doing everything the same as on the 
> local FS except skipping the truncate - because of the error handling we 
> already have, this appears to work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8085) Fix a variety of issues that can result in replicas getting out of sync.

2015-09-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8085:
--
 Assignee: Mark Miller
 Priority: Critical  (was: Major)
Fix Version/s: 5.4
   Trunk

> Fix a variety of issues that can result in replicas getting out of sync.
> 
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-8085.patch, SOLR-8085.patch, fail.150922_125320, 
> fail.150922_130608
>
>
> I've been discussing this failure I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has received enough docs to 
> pass peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8085) Fix a variety of issues that can result in replicas getting out of sync.

2015-09-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8085:
--
Summary: Fix a variety of issues that can result in replicas getting out of 
sync.  (was: ChaosMonkey Safe Leader Test fail with shard inconsistency.)

> Fix a variety of issues that can result in replicas getting out of sync.
> 
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-8085.patch, SOLR-8085.patch, fail.150922_125320, 
> fail.150922_130608
>
>
> I've been discussing this failure I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has received enough docs to 
> pass peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6818) Implementing Divergence from Independence (DFI) Term-Weighting for Lucene/Solr

2015-09-28 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933478#comment-14933478
 ] 

Robert Muir commented on LUCENE-6818:
-

It is not a bug; it is just how the index-time boost in Lucene has always 
worked. Boosting a document at index time is just a way for a user to make it 
artificially longer or shorter.

I don't think we should change this: it makes it much easier for people to 
experiment, since all of our scoring models do this the same way. It means you 
do not have to reindex to change the Similarity, for example.

It's easy to understand this as "at search time, the similarity sees the 
'normalized' document length". All I am saying is that these scoring models 
just have to make sure they don't do something totally nuts (like return 
negative, Infinity, or NaN scores) if the user index-time boosts with extreme 
values: values that might not make sense relative to, e.g., the 
collection-level statistics for the field. So in my opinion all that is needed 
is to add a `testCrazyBoosts` that looks a lot like `testCrazySpans` and just 
asserts those things, ideally across all 256 possible norm values, along the 
lines of the sketch below.
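
A minimal, self-contained sketch of the kind of check such a test could make; 
the {{scoreForNorm}} function below is an assumption standing in for however 
the real test would obtain a score for a document with a given encoded norm:

{code}
import java.util.function.IntToDoubleFunction;

// Sketch only, not the actual Lucene test: given any mapping from an encoded
// norm value (0..255) to a score, assert that extreme index-time boosts never
// produce negative, infinite, or NaN scores.
final class CrazyBoostCheck {
  static void assertSaneScores(IntToDoubleFunction scoreForNorm) {
    for (int norm = 0; norm < 256; norm++) {        // all 256 possible norm values
      double score = scoreForNorm.applyAsDouble(norm);
      if (Double.isNaN(score) || Double.isInfinite(score) || score < 0.0d) {
        throw new AssertionError("bad score " + score + " for norm=" + norm);
      }
    }
  }
}
{code}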

> Implementing Divergence from Independence (DFI) Term-Weighting for Lucene/Solr
> --
>
> Key: LUCENE-6818
> URL: https://issues.apache.org/jira/browse/LUCENE-6818
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/query/scoring
>Affects Versions: 5.3
>Reporter: Ahmet Arslan
>Assignee: Robert Muir
>Priority: Minor
>  Labels: similarity
> Fix For: Trunk
>
> Attachments: LUCENE-6818.patch, LUCENE-6818.patch, LUCENE-6818.patch
>
>
> As explained in the 
> [write-up|http://lucidworks.com/blog/flexible-ranking-in-lucene-4], many 
> state-of-the-art ranking model implementations have been added to Apache Lucene. 
> This issue aims to include DFI model, which is the non-parametric counterpart 
> of the Divergence from Randomness (DFR) framework.
> DFI is both parameter-free and non-parametric:
> * parameter-free: it does not require any parameter tuning or training.
>  * non-parametric: it does not make any assumptions about word frequency 
> distributions on document collections.
> It is highly recommended *not* to remove stopwords (very common terms: the, 
> of, and, to, a, in, for, is, on, that, etc) with this similarity.
> For more information see: [A nonparametric term weighting method for 
> information retrieval based on measuring the divergence from 
> independence|http://dx.doi.org/10.1007/s10791-013-9225-4]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8083:
-
Priority: Minor  (was: Major)

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose. 
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name, zookeeper, 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-8083.
--
   Resolution: Fixed
Fix Version/s: 5.4
   Trunk

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose. 
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name, zookeeper, 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2015-09-28 Thread Sean Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933446#comment-14933446
 ] 

Sean Xie commented on SOLR-7883:


To reproduce the issue:

curl 
"http://localhost:8983/solr/techproducts/mlt?q=id%3aSP2514N&mlt.interestingTerms=details&mlt.mintf=1&wt=json&mlt.match.include=true&rows=20&start=0&fl=score,id,name,manu_id_s,cat,features,popularity,instock&mlt.fl=cat,feature&facet=true&facet.field=cat";

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error is included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8103) make runtime lib available for schema components

2015-09-28 Thread Noble Paul (JIRA)
Noble Paul created SOLR-8103:


 Summary: make runtime lib available for schema components
 Key: SOLR-8103
 URL: https://issues.apache.org/jira/browse/SOLR-8103
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul


Right now, it is only possible to load solrconfig plugins from the runtime 
library (SOLR-7073). We need to extend the same support to schema plugins such 
as analyzers/tokenizers, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8102) solr fails connecting to zookeeper on osx jdk 1.8 ipv6

2015-09-28 Thread Matteo Grolla (JIRA)
Matteo Grolla created SOLR-8102:
---

 Summary: solr fails connecting to zookeeper on osx jdk 1.8 ipv6
 Key: SOLR-8102
 URL: https://issues.apache.org/jira/browse/SOLR-8102
 Project: Solr
  Issue Type: Bug
Reporter: Matteo Grolla


On my MacBook Pro I see Solr trying to connect to ZooKeeper, both on localhost, 
using an IPv6 address.
This fails with a timeout, so my solution has been to add
-Djava.net.preferIPv4Stack=true
to the Java options, and everything seems OK now.

Hope this helps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6590) Explore different ways to apply boosts

2015-09-28 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933375#comment-14933375
 ] 

Terry Smith edited comment on LUCENE-6590 at 9/28/15 2:38 PM:
--

Cheers Adrien. Sorry for the spammy replies before -- I wasn't expecting to see 
more than one discrepancy!

While you are looking at the Query.toString() behavior with respect to 
boosting, how would you feel about adding MatchAllDocsQuery.class to 
BoostQuery.NO_PARENS_REQUIRED_QUERIES so its toString() doesn't change across 
releases?

{noformat}
Query q = new MatchAllDocsQuery();
q.setBoost(0);
q.toString() -> *:*^0.0

new BoostQuery(new MatchAllDocsQuery(), 0).toString() -> (*:*)^0.0
{noformat}


was (Author: shebiki):
Cheers Adrien. Sorry for the spammy replies before -- I wasn't expecting to see 
more than one discrepancy!

While you are looking at the Query.toString() behavior with respect to 
boosting, how would you feel about adding MatchAllDocsQuery.class to 
BoostQuery.NO_PARENS_REQUIRED_QUERIES so its toString() doesn't change across 
releases?

Query q = new MatchAllDocsQuery();
q.setBoost(0);
q.toString() -> *:*^0.0

new BoostQuery(new MatchAllDocsQuery(), 0).toString() -> (*:*)^0.0


> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2015-09-28 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933375#comment-14933375
 ] 

Terry Smith commented on LUCENE-6590:
-

Cheers Adrien. Sorry for the spammy replies before -- I wasn't expecting to see 
more than one discrepancy!

While you are looking at the Query.toString() behavior with respect to 
boosting, how would you feel about adding MatchAllDocsQuery.class to 
BoostQuery.NO_PARENS_REQUIRED_QUERIES so its toString() doesn't change across 
releases?

Query q = new MatchAllDocsQuery();
q.setBoost(0);
q.toString() -> *:*^0.0

new BoostQuery(new MatchAllDocsQuery(), 0).toString() -> (*:*)^0.0


> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Writing custom Tokenizer

2015-09-28 Thread Siddhartha Singh Sandhu
Resolved. Used \u0023 instead of #.

On Mon, Sep 28, 2015 at 10:20 AM, Siddhartha Singh Sandhu <
sandhus...@gmail.com> wrote:

> Hi Ahmet,
>
> Worked partly. I might be doing something wrong:
>
> *My wdfftypes.txt is:*
>
> `
> # A customized type mapping for WordDelimiterFilterFactory
> # the allowable types are: LOWER, UPPER, ALPHA, DIGIT, ALPHANUM,
> SUBWORD_DELIM
> #
> # the default for any character without a mapping is always computed from
> # Unicode character properties
>
> # Map the $, %, '.', and ',' characters to DIGIT
> # This might be useful for financial data.
>
> # => ALPHA
> @ => ALPHA
> `
>
> The problem is that *#* is used for comments in this file. And the @ sign
> works perfectly when I analyze it, but the # does not display the same
> behavior:
>
> [image: Inline image 1]
>
> My schema.xml has the following field corresponding to this analysis:
> `
>positionIncrementGap="100">
> 
> 
>  generateWordParts="0" generateNumberParts="0" catenateWords="0"
> catenateNumbers="0" catenateAll="0" preserveOriginal="1"
> types="wdfftypes.txt"  />
> 
> 
> 
>   
>   
> 
>  generateWordParts="0" generateNumberParts="0" catenateWords="0"
> catenateNumbers="0" catenateAll="0" preserveOriginal="1"
> types="wdfftypes.txt"  />
> 
> 
> 
>   
>   
> `
>
> Regards,
>
> Sid.
>
> On Sun, Sep 27, 2015 at 9:23 PM, Ahmet Arslan 
> wrote:
>
>> Hi Sid,
>>
>>
>> One way is to use WhiteSpaceTokenizer and WordDelimeterFilter.
>>
>>
>> In some cases you might want to adjust how WordDelimiterFilter splits on
>> a per-character basis. To do this, you can supply a configuration file with
>> the "types" attribute that specifies custom character categories. An
>> example file is in subversion here. This is especially useful to add
>> "hashtag or currency" searches.
>>
>> Please see:
>>
>> https://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.WordDelimiterFilterFactory
>> https://issues.apache.org/jira/browse/SOLR-2059
>>
>> @ => ALPHA
>> # => ALPHA
>>
>> P.S. Maintaining a custom tokenizer will be a burden. It is done with
>> *.jflex files blended with java files.
>> Please see ClassicTokenizerImpl.jflex in the source tree for an example.
>>
>> Ahmet
>>
>>
>>
>>
>> On Monday, September 28, 2015 1:58 AM, Siddhartha Singh Sandhu <
>> sandhus...@gmail.com> wrote:
>>
>>
>>
>> Hi Ahmet,
>>
>> I want primarily 3 things.
>>
>> 1. To include # and @ as part of the string that is tokenized by the
>> standard tokenizer, which generally strips them off.
>> 2. When a string is tokenized, I just want to keep tokens which are #tags
>> and @mentions.
>> 3. I understand there is PatternTokenizer, but I wanted to leverage the
>> twitter-text GitHub project because I trust their regex more than my own.
>>
>> Not only the above three, but I also need to control the special
>> characters that are stripped from my string while tokenizing.
>>
>> Please let me know of your views.
>>
>> Regards,
>>
>> Sid.
>>
>>
>> On Sun, Sep 27, 2015 at 5:21 PM, Ahmet Arslan 
>> wrote:
>>
>> Hi Sid,
>> >
>> >Can you provide us more details?
>> >
>> >Usually you can get away without a custom tokenizer, there may be other
>> tricks to achieve your requirements.
>> >
>> >Ahmet
>> >
>> >
>> >
>> >
>> >On Sunday, September 27, 2015 11:29 PM, Siddhartha Singh Sandhu <
>> sandhus...@gmail.com> wrote:
>> >
>> >
>> >
>> >Hi Everyone,
>> >
>> >I wanted to write a custom tokenizer and wanted a generic direction and
>> some guidance on how I should go about achieving this goal.
>> >
>> >Your input will be much appreciated.
>> >
>> >Regards,
>> >
>> >Sid.
>> >
>> >-
>> >To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> >For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2015-09-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933362#comment-14933362
 ] 

Adrien Grand commented on LUCENE-6590:
--

Thanks Terry, I'll fix.

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6818) Implementing Divergence from Independence (DFI) Term-Weighting for Lucene/Solr

2015-09-28 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated LUCENE-6818:
-
Attachment: LUCENE-6818.patch

* renamed failing test to {{TestSimilarityBase#testIndexTimeBoost}}
* randomized the test method a bit

> Implementing Divergence from Independence (DFI) Term-Weighting for Lucene/Solr
> --
>
> Key: LUCENE-6818
> URL: https://issues.apache.org/jira/browse/LUCENE-6818
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/query/scoring
>Affects Versions: 5.3
>Reporter: Ahmet Arslan
>Assignee: Robert Muir
>Priority: Minor
>  Labels: similarity
> Fix For: Trunk
>
> Attachments: LUCENE-6818.patch, LUCENE-6818.patch, LUCENE-6818.patch
>
>
> As explained in the 
> [write-up|http://lucidworks.com/blog/flexible-ranking-in-lucene-4], many 
> state-of-the-art ranking model implementations have been added to Apache Lucene. 
> This issue aims to include DFI model, which is the non-parametric counterpart 
> of the Divergence from Randomness (DFR) framework.
> DFI is both parameter-free and non-parametric:
> * parameter-free: it does not require any parameter tuning or training.
>  * non-parametric: it does not make any assumptions about word frequency 
> distributions on document collections.
> It is highly recommended *not* to remove stopwords (very common terms: the, 
> of, and, to, a, in, for, is, on, that, etc) with this similarity.
> For more information see: [A nonparametric term weighting method for 
> information retrieval based on measuring the divergence from 
> independence|http://dx.doi.org/10.1007/s10791-013-9225-4]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Writing custom Tokenizer

2015-09-28 Thread Siddhartha Singh Sandhu
Hi Ahmet,

Worked partly. I might be doing something wrong:

*My wdfftypes.txt is:*

`
# A customized type mapping for WordDelimiterFilterFactory
# the allowable types are: LOWER, UPPER, ALPHA, DIGIT, ALPHANUM,
SUBWORD_DELIM
#
# the default for any character without a mapping is always computed from
# Unicode character properties

# Map the $, %, '.', and ',' characters to DIGIT
# This might be useful for financial data.

# => ALPHA
@ => ALPHA
`

The problem is that *#* is used for comments in this file. And the @ sign works
perfectly when I analyze it, but the # does not display the same behavior:

[image: Inline image 1]

My schema.xml has the following field corresponding to this analysis:
`
  






  
  





  
  
`

Regards,

Sid.

On Sun, Sep 27, 2015 at 9:23 PM, Ahmet Arslan 
wrote:

> Hi Sid,
>
>
> One way is to use WhiteSpaceTokenizer and WordDelimeterFilter.
>
>
> In some cases you might want to adjust how WordDelimiterFilter splits on a
> per-character basis. To do this, you can supply a configuration file with
> the "types" attribute that specifies custom character categories. An
> example file is in subversion here. This is especially useful to add
> "hashtag or currency" searches.
>
> Please see:
>
> https://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.WordDelimiterFilterFactory
> https://issues.apache.org/jira/browse/SOLR-2059
>
> @ => ALPHA
> # => ALPHA
>
> P.S. Maintaining a custom tokenizer will be a burden. It is done with
> *.jflex files blended with java files.
> Please see ClassicTokenizerImpl.jflex in the source tree for an example.
>
> Ahmet
>
>
>
>
> On Monday, September 28, 2015 1:58 AM, Siddhartha Singh Sandhu <
> sandhus...@gmail.com> wrote:
>
>
>
> Hi Ahmet,
>
> I want primarily 3 things.
>
> 1. To include # and @ as part of the string that is tokenized by the
> standard tokenizer, which generally strips them off.
> 2. When a string is tokenized, I just want to keep tokens which are #tags
> and @mentions.
> 3. I understand there is PatternTokenizer, but I wanted to leverage the
> twitter-text GitHub project because I trust their regex more than my own.
>
> Not only the above three, but I also need to control the special
> characters that are stripped from my string while tokenizing.
>
> Please let me know of your views.
>
> Regards,
>
> Sid.
>
>
> On Sun, Sep 27, 2015 at 5:21 PM, Ahmet Arslan 
> wrote:
>
> Hi Sid,
> >
> >Can you provide us more details?
> >
> >Usually you can get away without a custom tokenizer, there may be other
> tricks to achieve your requirements.
> >
> >Ahmet
> >
> >
> >
> >
> >On Sunday, September 27, 2015 11:29 PM, Siddhartha Singh Sandhu <
> sandhus...@gmail.com> wrote:
> >
> >
> >
> >Hi Everyone,
> >
> >I wanted to write a custom tokenizer and wanted a generic direction and
> some guidance on how I should go about achieving this goal.
> >
> >Your input will be much appreciated.
> >
> >Regards,
> >
> >Sid.
> >
> >-
> >To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2015-09-28 Thread Sean Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933343#comment-14933343
 ] 

Sean Xie commented on SOLR-7883:


Actually, I don't see how to configure the FacetComponent in the handler section:


> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error is included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933266#comment-14933266
 ] 

ASF subversion and git services commented on SOLR-8083:
---

Commit 1705683 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1705683 ]

SOLR-8083: convert the ZookeeperInfoServlet to a request handler at 
/admin/zookeeper

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose. 
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name, zookeeper, 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933218#comment-14933218
 ] 

ASF subversion and git services commented on SOLR-8083:
---

Commit 1705662 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1705662 ]

SOLR-8083: convert the ZookeeperInfoServlet to a request handler at 
/admin/zookeeper

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose. 
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name, zookeeper, 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6817) ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in toString()

2015-09-28 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated LUCENE-6817:
-
Attachment: LUCENE-6817.patch

> ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in 
> toString()
> ---
>
> Key: LUCENE-6817
> URL: https://issues.apache.org/jira/browse/LUCENE-6817
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk
>
> Attachments: LUCENE-6817.patch, LUCENE-6817.patch
>
>
> This one is quite simple (I think) -- ComplexPhraseQuery doesn't display the 
> slop factor which, when the result of parsing is dumped to logs, for example, 
> can be confusing.
> I'm heading for a weekend out of office in a few hours... so in the spirit of 
> not committing and running away ( :) ), if anybody wishes to tackle this, go 
> ahead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 807 - Still Failing

2015-09-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/807/

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=12379, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=12379, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:43418/_j: Could not find collection : 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([28E9BB72335AD47A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)


FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6508, name=collection2, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6508, name=collection2, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:43430: Could not find collection : 
awholynewstresscollection_collection2_0
at __randomizedtesting.SeedInfo.seed([28E9BB72335AD47A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)




Build Log:
[...truncated 10194 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest_28E9BB72335AD47A-001/init-core-data-001
   [junit4]   2> 439776 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[28E9BB72335AD47A]-worker) [
] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 439776 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[28E9BB72335AD47A]-worker) [
] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 439848 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[28E9BB72335AD47A]-worker) [
] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 439855 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[28E9BB72335AD47A]-worker) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 439858 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[28E9BB72335AD47A]-worker) [
] o.m.log jetty-6.1.26
   [junit4]   2> 439871 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[28E9BB72335AD47A]-worker) [
] o.m.log Extr

[jira] [Updated] (LUCENE-6817) ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in toString()

2015-09-28 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated LUCENE-6817:
-
Attachment: LUCENE-6817.patch

> ComplexPhraseQueryParser.ComplexPhraseQuery does not display slop in 
> toString()
> ---
>
> Key: LUCENE-6817
> URL: https://issues.apache.org/jira/browse/LUCENE-6817
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Priority: Trivial
> Fix For: Trunk
>
> Attachments: LUCENE-6817.patch
>
>
> This one is quite simple (I think) -- ComplexPhraseQuery doesn't display the 
> slop factor which, when the result of parsing is dumped to logs, for example, 
> can be confusing.
> I'm heading for a weekend out of office in a few hours... so in the spirit of 
> not committing and running away ( :) ), if anybody wishes to tackle this, go 
> ahead.
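
For anyone reading along without the patch open, a minimal self-contained sketch (not the attached LUCENE-6817.patch) of the kind of change involved: appending the slop to the generated string in the same {{"..."~N}} style that Lucene's PhraseQuery uses. The class and field names below are made up for illustration.

{code}
// Illustrative only: a standalone toString() that surfaces the slop factor.
// phraseText and slop are hypothetical fields standing in for the query's state.
public class SlopToStringSketch {
    private final String phraseText = "jakarta apache";
    private final int slop = 3;

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder("\"").append(phraseText).append('"');
        if (slop != 0) {
            sb.append('~').append(slop); // show the slop when the query is dumped to logs
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(new SlopToStringSketch()); // prints "jakarta apache"~3
    }
}
{code}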



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8089) Support query parsers being able to set enablePositionIncrements

2015-09-28 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933173#comment-14933173
 ] 

Varun Thacker commented on SOLR-8089:
-

{{QueryParserBase#setEnablePositionIncrements}} will ignore holes in the query 
token stream. We also need some sort of support at indexing time for it to match.
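
To illustrate the query-side knob being discussed, a Lucene-level sketch only (the field name and query text are made up, and the index must have been analyzed consistently for matches to line up): disabling position increments on the classic query parser lays the phrase terms out consecutively, so the phrase can match across a removed stopword.

{code}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class PositionIncrementsSketch {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer(); // default stop set removes "the"
        QueryParser parser = new QueryParser("content", analyzer);

        // Default behaviour: the removed stopword leaves a position gap, so the
        // phrase only matches documents indexed with the same gap.
        Query withHole = parser.parse("\"out the window\"");
        System.out.println(withHole);

        // With increments disabled, "out" and "window" are treated as adjacent.
        parser.setEnablePositionIncrements(false);
        Query withoutHole = parser.parse("\"out the window\"");
        System.out.println(withoutHole);
    }
}
{code}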

> Support query parsers being able to set enablePositionIncrements
> 
>
> Key: SOLR-8089
> URL: https://issues.apache.org/jira/browse/SOLR-8089
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR-8089.patch
>
>
> In 5.0, LUCENE-4963 disallowed setting enablePositionIncrements=false on any 
> token filter, the idea being that no filter should change the token stream in a 
> way that could mess up preceding or following analysis components.
> So if users want PhraseQueries that match across stopwords, they cannot have 
> them unless the parser is configured to not take position increments into 
> account when generating phrase queries.
> This is documented in the "Token Position Increments" section here: 
> https://lucene.apache.org/core/5_3_0/core/org/apache/lucene/analysis/package-summary.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8101) Installation script permission issues and other scripts fixes

2015-09-28 Thread Sergey Urushkin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Urushkin updated SOLR-8101:
--
Attachment: solr-5.3.1.patch

Patch attached.

> Installation script permission issues and other scripts fixes
> -
>
> Key: SOLR-8101
> URL: https://issues.apache.org/jira/browse/SOLR-8101
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Sergey Urushkin
>  Labels: patch, security
> Attachments: solr-5.3.1.patch
>
>
> Until [https://issues.apache.org/jira/browse/SOLR-7871] is fixed, I suggest 
> improving the current shell scripts. The provided patch:
>   *  changes the {{$SOLR_ENV}} default to {{/etc/default/solr.in.sh}}. This is 
> a *security* issue: if {{solr.in.sh}} is placed in a directory which is writable 
> by {{$SOLR_USER}}, the solr process is able to write to it, and then it will be 
> run by root on start/shutdown.
>   * changes permissions. {{$SOLR_USER}} should only be able to write to 
> {{$SOLR_VAR_DIR}}, {{$SOLR_INSTALL_DIR/server/solr-webapp}} and 
> {{$SOLR_INSTALL_DIR/server/logs}}. The {{solr-webapp}} directory might be 
> inspected more. These directories should not be readable by other users as 
> they may contain personal information.
>   * sets the {{$SOLR_USER}} home directory to {{$SOLR_VAR_DIR}}. As far as I can 
> see, there is no need for a {{/home/solr}} directory.
>   * adds quotes to unquoted variables
>   * adds a leading zero to chmod commands
>   * removes the group from chown commands (uses ":")
> Tested on Ubuntu 14.04 amd64, but the changes are pretty system-independent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8101) Installation script permission issues and other scripts fixes

2015-09-28 Thread Sergey Urushkin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Urushkin updated SOLR-8101:
--
Description: 
Until [https://issues.apache.org/jira/browse/SOLR-7871] is fixed, I suggest 
improving the current shell scripts. The provided patch:
  *  changes the {{$SOLR_ENV}} default to {{/etc/default/solr.in.sh}}. This is a 
*security* issue: if {{solr.in.sh}} is placed in a directory which is writable by 
{{$SOLR_USER}}, the solr process is able to write to it, and then it will be run by 
root on start/shutdown.
  * changes permissions. {{$SOLR_USER}} should only be able to write to 
{{$SOLR_VAR_DIR}}, {{$SOLR_INSTALL_DIR/server/solr-webapp}} and 
{{$SOLR_INSTALL_DIR/server/logs}}. The {{solr-webapp}} directory might be 
inspected more. These directories should not be readable by other users as they 
may contain personal information.
  * sets the {{$SOLR_USER}} home directory to {{$SOLR_VAR_DIR}}. As far as I can 
see, there is no need for a {{/home/solr}} directory.
  * adds quotes to unquoted variables
  * adds a leading zero to chmod commands
  * removes the group from chown commands (uses ":")

Tested on Ubuntu 14.04 amd64, but the changes are pretty system-independent.

  was:
Until [https://issues.apache.org/jira/browse/SOLR-7871] is fixed, I suggest 
improving the current shell scripts. The provided patch:
  *  changes the {{$SOLR_ENV}} default to {{/etc/default/solr.in.sh}}. This is a 
*security* issue: if {{solr.in.sh}} is placed in a directory which is writable by 
{{$SOLR_USER}}, the solr process is able to write to it, and then it will be run by 
root on start/shutdown.
  * changes permissions. {{$SOLR_USER}} should only be able to write to 
{{$SOLR_VAR_DIR , $SOLR_INSTALL_DIR/server/solr-webapp , 
$SOLR_INSTALL_DIR/server/logs}}. The {{solr-webapp}} directory might be inspected 
more. These directories should not be readable by other users as they may 
contain personal information.
  * sets the {{$SOLR_USER}} home directory to {{$SOLR_VAR_DIR}}. As far as I can 
see, there is no need for a {{/home/solr}} directory.
  * adds quotes to unquoted variables
  * adds a leading zero to chmod commands
  * removes the group from chown commands (uses ":")

Tested on Ubuntu 14.04 amd64, but the changes are pretty system-independent.


> Installation script permission issues and other scripts fixes
> -
>
> Key: SOLR-8101
> URL: https://issues.apache.org/jira/browse/SOLR-8101
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Sergey Urushkin
>  Labels: patch, security
>
> Until [https://issues.apache.org/jira/browse/SOLR-7871] is fixed, I suggest 
> improving the current shell scripts. The provided patch:
>   *  changes the {{$SOLR_ENV}} default to {{/etc/default/solr.in.sh}}. This is 
> a *security* issue: if {{solr.in.sh}} is placed in a directory which is writable 
> by {{$SOLR_USER}}, the solr process is able to write to it, and then it will be 
> run by root on start/shutdown.
>   * changes permissions. {{$SOLR_USER}} should only be able to write to 
> {{$SOLR_VAR_DIR}}, {{$SOLR_INSTALL_DIR/server/solr-webapp}} and 
> {{$SOLR_INSTALL_DIR/server/logs}}. The {{solr-webapp}} directory might be 
> inspected more. These directories should not be readable by other users as 
> they may contain personal information.
>   * sets the {{$SOLR_USER}} home directory to {{$SOLR_VAR_DIR}}. As far as I can 
> see, there is no need for a {{/home/solr}} directory.
>   * adds quotes to unquoted variables
>   * adds a leading zero to chmod commands
>   * removes the group from chown commands (uses ":")
> Tested on Ubuntu 14.04 amd64, but the changes are pretty system-independent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8101) Installation script permission issues and other scripts fixes

2015-09-28 Thread Sergey Urushkin (JIRA)
Sergey Urushkin created SOLR-8101:
-

 Summary: Installation script permission issues and other scripts 
fixes
 Key: SOLR-8101
 URL: https://issues.apache.org/jira/browse/SOLR-8101
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 5.3.1
Reporter: Sergey Urushkin


Until [https://issues.apache.org/jira/browse/SOLR-7871] is fixed, I suggest 
improving the current shell scripts. The provided patch:
  *  changes the {{$SOLR_ENV}} default to {{/etc/default/solr.in.sh}}. This is a 
*security* issue: if {{solr.in.sh}} is placed in a directory which is writable by 
{{$SOLR_USER}}, the solr process is able to write to it, and then it will be run by 
root on start/shutdown.
  * changes permissions. {{$SOLR_USER}} should only be able to write to 
{{$SOLR_VAR_DIR , $SOLR_INSTALL_DIR/server/solr-webapp , 
$SOLR_INSTALL_DIR/server/logs}}. The {{solr-webapp}} directory might be inspected 
more. These directories should not be readable by other users as they may 
contain personal information.
  * sets the {{$SOLR_USER}} home directory to {{$SOLR_VAR_DIR}}. As far as I can 
see, there is no need for a {{/home/solr}} directory.
  * adds quotes to unquoted variables
  * adds a leading zero to chmod commands
  * removes the group from chown commands (uses ":")

Tested on Ubuntu 14.04 amd64, but the changes are pretty system-independent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8100) Standardize the API calls required for admin UI

2015-09-28 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933151#comment-14933151
 ] 

Noble Paul commented on SOLR-8100:
--

Yes, the Zookeeper API.

> Standardize the API calls required for admin UI
> ---
>
> Key: SOLR-8100
> URL: https://issues.apache.org/jira/browse/SOLR-8100
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>
> The API required by the Admin UI is neither documented nor tested. It is 
> largely UI centric. We need to standardize the calls and write tests for the 
> API.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8100) Standardize the API calls required for admin UI

2015-09-28 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933146#comment-14933146
 ] 

Upayavira commented on SOLR-8100:
-

Presumably you are referring to Zookeeper APIs? Much of the UI consumes 
standard Solr APIs - it is particularly in the ZK area that it diverges, and it 
seems the UI and the API were created in concert.

> Standardize the API calls required for admin UI
> ---
>
> Key: SOLR-8100
> URL: https://issues.apache.org/jira/browse/SOLR-8100
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>
> The API required by the Admin UI is neither documented nor tested. It is 
> largely UI centric. We need to standardize the calls and write tests for the 
> API.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933113#comment-14933113
 ] 

Shalin Shekhar Mangar commented on SOLR-8083:
-

{quote}
* please tell me the list of live nodes
* please tell me the state of this collection
{quote}

The clusterstatus API tells you those things already.
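
For reference, a minimal self-contained sketch of fetching that information from the Collections API over plain HTTP (the host, port and output handling here are placeholders, not what the admin UI actually does); the CLUSTERSTATUS response carries live_nodes and the per-collection shard/replica state:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ClusterStatusSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; wt=json returns the cluster status as JSON.
        URL url = new URL("http://localhost:8983/solr/admin/collections"
                + "?action=CLUSTERSTATUS&wt=json");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // includes live_nodes and collection state
            }
        }
    }
}
{code}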

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose.
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name called zookeeper 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933105#comment-14933105
 ] 

Noble Paul commented on SOLR-8083:
--

[~upayavira] I've opened SOLR-8100

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose.
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name called zookeeper 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8100) Standardize the API calls required for admin UI

2015-09-28 Thread Noble Paul (JIRA)
Noble Paul created SOLR-8100:


 Summary: Standardize the API calls required for admin UI
 Key: SOLR-8100
 URL: https://issues.apache.org/jira/browse/SOLR-8100
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul


The API required by the Admin UI is neither documented nor tested. It is 
largely UI centric. We need to standardize the calls and write tests for the 
API.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933102#comment-14933102
 ] 

Noble Paul commented on SOLR-8083:
--

bq. work being done on the wrong side of the API

I shall open a ticket for the same. Please document the input params and a 
sample output you would need, and I shall add the API.

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose.
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name called zookeeper 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933098#comment-14933098
 ] 

Upayavira commented on SOLR-8083:
-

If in this ticket you are simply proposing to replace /zookeeper with 
/admin/zookeeper, implemented as a RequestHandler but supporting the same API, 
then I am fully in support of that, and will do the UI side work.

I agree wholeheartedly with the API not being UI centric - that's what I was 
referring to above - work being done on the wrong side of the API, and would 
love to see that all cleaned up - and again, I'll do the UI side work to make 
it happen.

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose.
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name called zookeeper 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14928045#comment-14928045
 ] 

Noble Paul commented on SOLR-8083:
--

This change is not for V2 or the new UI.

This is largely a cleanup process. Solr should not need servlets.

bq. How would you expect the admin UI to implement the tree browser without 
using the /admin/zookeeper API?

The tree browser will need some Zookeeper API. But whatever it is, it should 
be an API which is not UI centric. I see stuff like {{href=zookeeper?xxx}}, 
which is clearly something the API should never require. Again, I'm not 
proposing that change in this ticket.

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose.
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name called zookeeper 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 86 - Failure!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/86/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[replication.properties, index.20150928024902067, index.20150928024859995, 
index.properties] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [replication.properties, index.20150928024902067, 
index.20150928024859995, index.properties] expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([C0CA7FCC70A8E85E:1B617F0A758081ED]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:818)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:785)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.ra

[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14910205#comment-14910205
 ] 

Upayavira commented on SOLR-8083:
-

I'm in support of that separation.

How would you expect the admin UI to implement the tree browser without using 
the /admin/zookeeper API?

I do see a number of situations where functionality has been placed on one side or 
the other based upon what people know how to do, rather than on where it should 
be. It'd be great to fix those.

Regarding /zookeeper, that is sufficiently fundamental a feature that we must 
make such a change work across both the old and new UIs. Using your new API 
(#2) could be done in the new UI only.

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose.
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name called zookeeper 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6818) Implementing Divergence from Independence (DFI) Term-Weighting for Lucene/Solr

2015-09-28 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14910174#comment-14910174
 ] 

Ahmet Arslan commented on LUCENE-6818:
--

bq. The typical solution is to do something like adjust expected:
Thanks Robert for the suggestion and explanation. I used the typical solution; 
it's working now.

bq. I have not read the paper, but these are things to deal with when 
integrating into lucene.
For your information, if you want to look at it, the Terrier 4.0 source tree has 
this model in DFIC.java.

bq. index-time boosts work on the norm, by making the document appear shorter 
or longer, so docLen might have a "crazy" value if the user does this.
I was relying on {{o.a.l.search.similarities.SimilarityBase}} for this, but it 
looks like all of its subclasses (DFR, IB) have this problem. I included a 
{{TestSimilarityBase#testNorms}} method in the new patch to demonstrate the 
problem. If I am not missing something obvious, this is a bug, no?

> Implementing Divergence from Independence (DFI) Term-Weighting for Lucene/Solr
> --
>
> Key: LUCENE-6818
> URL: https://issues.apache.org/jira/browse/LUCENE-6818
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/query/scoring
>Affects Versions: 5.3
>Reporter: Ahmet Arslan
>Assignee: Robert Muir
>Priority: Minor
>  Labels: similarity
> Fix For: Trunk
>
> Attachments: LUCENE-6818.patch, LUCENE-6818.patch
>
>
> As explained in the 
> [write-up|http://lucidworks.com/blog/flexible-ranking-in-lucene-4], many 
> state-of-the-art ranking model implementations have been added to Apache Lucene. 
> This issue aims to include the DFI model, which is the non-parametric counterpart 
> of the Divergence from Randomness (DFR) framework.
> DFI is both parameter-free and non-parametric:
> * parameter-free: it does not require any parameter tuning or training.
> * non-parametric: it does not make any assumptions about word frequency 
> distributions on document collections.
> It is highly recommended *not* to remove stopwords (very common terms: the, 
> of, and, to, a, in, for, is, on, that, etc.) with this similarity.
> For more information see: [A nonparametric term weighting method for 
> information retrieval based on measuring the divergence from 
> independence|http://dx.doi.org/10.1007/s10791-013-9225-4]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3556 - Failure

2015-09-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3556/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
ERROR: SolrIndexSearcher opens=8 closes=7

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=8 closes=7
at __randomizedtesting.SeedInfo.seed([7AF83498C73EFD81]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:467)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:233)
at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=2332, 
name=qtp1580330417-2332, state=WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient.blockUntilFinished(ConcurrentUpdateSolrClient.java:404)
 at 
org.apache.solr.update.StreamingSolrClients.blockUntilFinished(StreamingSolrClients.java:103)
 at 
org.apache.solr.update.SolrCmdDistributor.blockAndDoRetries(SolrCmdDistributor.java:231)
 at 
org.apache.solr.update.SolrCmdDistributor.finish(SolrCmdDistributor.java:89)
 at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:778)
 at 
org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1615)
 at 
org.apache.solr.update.processor.LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:183)
 at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:83)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:151)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:2079) 
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:667) 
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460) at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 

[jira] [Updated] (LUCENE-6818) Implementing Divergence from Independence (DFI) Term-Weighting for Lucene/Solr

2015-09-28 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated LUCENE-6818:
-
Attachment: LUCENE-6818.patch

This patch prevents an infinite score by using the +1 trick. Now 
{{TestSimilarity2#testCrazySpans}} passes.
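
In case it helps reviewers without the patch handy, a rough standalone sketch of the shape this takes, assuming the standardized divergence (tf - expected) / sqrt(expected) from the paper, with the expected frequency derived from collection statistics; the exact formula and where the +1 sits in LUCENE-6818.patch may differ:

{code}
// Illustrative only, not the attached patch. Computes a DFI-style score from raw
// counts and shows a +1 style guard that keeps the log argument finite and positive
// even when tf does not exceed the expected frequency.
public class DfiScoreSketch {

    static double dfiScore(double tf, double totalTermFreq,
                           double docLen, double numberOfFieldTokens) {
        // Expected frequency under independence: (term total * doc length) / collection length.
        double expected = (totalTermFreq * docLen) / numberOfFieldTokens;
        // Standardized divergence from independence.
        double measure = (tf - expected) / Math.sqrt(expected);
        // +1 guard so the logarithm never sees zero or a negative value.
        return log2(Math.max(measure, 0) + 1);
    }

    static double log2(double x) {
        return Math.log(x) / Math.log(2);
    }

    public static void main(String[] args) {
        // Hypothetical counts: tf=3 in a 100-token document; the term occurs
        // 1000 times in a field of 1,000,000 tokens.
        System.out.println(dfiScore(3, 1000, 100, 1_000_000));
    }
}
{code}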

> Implementing Divergence from Independence (DFI) Term-Weighting for Lucene/Solr
> --
>
> Key: LUCENE-6818
> URL: https://issues.apache.org/jira/browse/LUCENE-6818
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/query/scoring
>Affects Versions: 5.3
>Reporter: Ahmet Arslan
>Assignee: Robert Muir
>Priority: Minor
>  Labels: similarity
> Fix For: Trunk
>
> Attachments: LUCENE-6818.patch, LUCENE-6818.patch
>
>
> As explained in the 
> [write-up|http://lucidworks.com/blog/flexible-ranking-in-lucene-4], many 
> state-of-the-art ranking model implementations have been added to Apache Lucene. 
> This issue aims to include the DFI model, which is the non-parametric counterpart 
> of the Divergence from Randomness (DFR) framework.
> DFI is both parameter-free and non-parametric:
> * parameter-free: it does not require any parameter tuning or training.
> * non-parametric: it does not make any assumptions about word frequency 
> distributions on document collections.
> It is highly recommended *not* to remove stopwords (very common terms: the, 
> of, and, to, a, in, for, is, on, that, etc.) with this similarity.
> For more information see: [A nonparametric term weighting method for 
> information retrieval based on measuring the divergence from 
> independence|http://dx.doi.org/10.1007/s10791-013-9225-4]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14910160#comment-14910160
 ] 

Noble Paul commented on SOLR-8083:
--

Thanks,

I would like to split the whole thing into two:

 # get rid of the servlet and make it just a normal request handler. This step 
would just return the same output as it used to, so really nothing changes 
other than the path at which this is accessed.
 # enhance the standard APIs in such a way that the admin UI can be completely 
driven using them. The standard APIs will have proper JUnit tests and a 
well-known output. The current output is driven by the UI and not the other way 
around.

I would say we open a ticket for #2 and track it separately, and the admin UI 
should eventually stop depending on the {{/admin/zookeeper}} endpoint.


> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose.
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name called zookeeper 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8083) Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper

2015-09-28 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14910145#comment-14910145
 ] 

Upayavira commented on SOLR-8083:
-

Yes please!!

The request I have is that this abstract out functions that Zookeeper offers, 
so:

 * please tell me the list of live nodes
 * please tell me the state of this collection

We get real craziness when the UI depends upon clusterstate.json, which is 
replaced by state.json, and then (I assume) the ZookeeperServlet has to rebuild 
a clusterstate.json file, pretending that it exists in its file system, to keep 
the UI happy.

For the tree view, we still need the ability to list the contents of nodes, but 
I'd rather we only ever use that for the tree view.

I can look through the UI source, and give you a list of the API endpoints I 
would love to have. I doubt they will be many or hard to implement. Would that 
work?

> Convert the ZookeeperInfoServlet to a handler at /admin/zookeeper 
> --
>
> Key: SOLR-8083
> URL: https://issues.apache.org/jira/browse/SOLR-8083
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8083.patch
>
>
> There is no need to have an extra servlet for this purpose.
> By keeping this outside of Solr we cannot even secure it properly using the 
> security framework, and we have now taken up a top-level name called zookeeper 
> which cannot be used as a collection name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2015-09-28 Thread Oliver Schrenk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14910143#comment-14910143
 ] 

Oliver Schrenk commented on SOLR-7495:
--

Is there another way of faceting numeric data now?

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet on an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json&fl=id&fq=index_type:foobar&group=true&group.field=year_make_model&group.facet=true&facet=true&facet.field=year
> {code}
> Exception :
> {code}
> ull:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:15

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 84 - Still Failing!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/84/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.RemoteQueryErrorTest.test

Error Message:
There are still nodes recoverying - waited for 15 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 15 
seconds
at 
__randomizedtesting.SeedInfo.seed([2172154495F25F90:A9262A9E3B0E3268]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:836)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at 
org.apache.solr.cloud.RemoteQueryErrorTest.test(RemoteQueryErrorTest.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.ap

[jira] [Commented] (SOLR-7883) MoreLikeThis is incompatible with facets

2015-09-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14910135#comment-14910135
 ] 

Tobias Kässmann commented on SOLR-7883:
---

Set facet=true by default and place the FacetComponent at the end of the 
component list.

> MoreLikeThis is incompatible with facets
> 
>
> Key: SOLR-7883
> URL: https://issues.apache.org/jira/browse/SOLR-7883
> Project: Solr
>  Issue Type: Bug
>  Components: faceting, MoreLikeThis
>Affects Versions: 5.2.1
> Environment: Arch Linux / OpenJDK 7.u85_2.6.1-1
>Reporter: Thomas Seidl
>
> When using the {{MoreLikeThis}} request handler, it doesn't seem possible to 
> also have facets. This worked in Solr 4, but seems to be broken now.
> Example:
> This works: {{?qt=mlt&q=id:item1&mlt.fl=content}}
> This doesn't: {{?qt=mlt&q=id:item1&mlt.fl=content&facet=true}}
> (Yes, you don't even need to specify any facet fields/ranges/queries. The 
> {{q}} query just has to match an item.)
> While the latter will actually return the same result set as the former, the 
> HTTP status is 500 and the following error is included in the response:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.request.SimpleFacets.getHeatmapCounts(SimpleFacets.java:1753)
>   at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:289)
>   at 
> org.apache.solr.handler.MoreLikeThisHandler.handleRequestBody(MoreLikeThisHandler.java:233)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_80) - Build # 14040 - Failure!

2015-09-28 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14040/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   "collMinRf_1x3":{ "router":{"name":"compositeId"}, 
"replicationFactor":"3", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", "replicas":{ 
  "core_node1":{ "node_name":"127.0.0.1:57058_", 
"state":"active", "core":"collMinRf_1x3_shard1_replica1",   
  "base_url":"http://127.0.0.1:57058"},   "core_node2":{ 
"node_name":"127.0.0.1:45938_", "state":"active", 
"core":"collMinRf_1x3_shard1_replica3", 
"base_url":"http://127.0.0.1:45938";, "leader":"true"},   
"core_node3":{ "node_name":"127.0.0.1:59885_", 
"state":"active", "core":"collMinRf_1x3_shard1_replica2",   
  "base_url":"http://127.0.0.1:59885", "maxShardsPerNode":"1", 
"autoAddReplicas":"false"},   "c8n_1x2":{ "router":{"name":"compositeId"},  
   "replicationFactor":"2", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", "replicas":{ 
  "core_node1":{ "node_name":"127.0.0.1:54547_", 
"state":"active", "core":"c8n_1x2_shard1_replica1", 
"base_url":"http://127.0.0.1:54547";, "leader":"true"},   
"core_node2":{ "node_name":"127.0.0.1:45938_", 
"state":"recovering", "core":"c8n_1x2_shard1_replica2", 
"base_url":"http://127.0.0.1:45938", "maxShardsPerNode":"1", 
"autoAddReplicas":"false"},   "collection1":{ 
"router":{"name":"compositeId"}, "replicationFactor":"1", "shards":{
   "shard1":{ "range":"8000-", "state":"active",
 "replicas":{"core_node2":{ "node_name":"127.0.0.1:59885_", 
"state":"active", "core":"collection1", 
"base_url":"http://127.0.0.1:59885";, "leader":"true"}}},   
"shard2":{ "range":"0-7fff", "state":"active", 
"replicas":{   "core_node1":{ 
"node_name":"127.0.0.1:57058_", "state":"active", 
"core":"collection1", "base_url":"http://127.0.0.1:57058";,  
   "leader":"true"},   "core_node3":{ 
"node_name":"127.0.0.1:54547_", "state":"active", 
"core":"collection1", "base_url":"http://127.0.0.1:54547", 
"maxShardsPerNode":"1", "autoCreated":"true", 
"autoAddReplicas":"false"},   "control_collection":{ 
"router":{"name":"compositeId"}, "replicationFactor":"1", 
"shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{"core_node1":{ 
"node_name":"127.0.0.1:45938_", "state":"active", 
"core":"collection1", "base_url":"http://127.0.0.1:45938";,  
   "leader":"true", "maxShardsPerNode":"1", "autoCreated":"true",   
  "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collMinRf_1x3":{
"router":{"name":"compositeId"},
"replicationFactor":"3",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"node_name":"127.0.0.1:57058_",
"state":"active",
"core":"collMinRf_1x3_shard1_replica1",
"base_url":"http://127.0.0.1:57058"},
  "core_node2":{
"node_name":"127.0.0.1:45938_",
"state":"active",
"core":"collMinRf_1x3_shard1_replica3",
"base_url":"http://127.0.0.1:45938";,
"leader":"true"},
  "core_node3":{
"node_name":"127.0.0.1:59885_",
"state":"active",
"core":"collMinRf_1x3_shard1_replica2",
"base_url":"http://127.0.0.1:59885",
"maxShardsPerNode":"1",
"autoAddReplicas":"false"},
  "c8n_1x2":{
"router":{"name":"compositeId"},
"replicationFactor":"2",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"node_name":"127.0.0.1:54547_",
"state":"active",
"core":"c8n_1x2_shard1_replica1",
"base_url":"http://127.0.0.1:54547";,
"leader":"true"},
  "core_node2":{
"node_name":"127.0.0.1:45938_",
"state":"recovering",
"core":"c8n_1x2_shard1_replica2",
"base_url":"http://