[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 204 - Still Failing

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/204/

No tests ran.

Build Log:
[...truncated 24220 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2186 links (1743 relative) to 2903 anchors in 228 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.4.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

[...truncated: repeated resolve / ivy-availability-check / ivy-configure output...]


[JENKINS] Lucene-Solr-repro - Build # 533 - Still Unstable

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/533/

[...truncated 33 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2498/consoleText

[repro] Revision: 46037dc67494a746857048399c02a6cf6f7a07c1

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=1A9A3AB53E516E1E 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=bg 
-Dtests.timezone=US/Indiana-Starke -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
5ef43e900f8abeeb56cb9bba8ca1d050ec956f21
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 46037dc67494a746857048399c02a6cf6f7a07c1

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=1A9A3AB53E516E1E -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=bg -Dtests.timezone=US/Indiana-Starke -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 6716 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of master
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=1A9A3AB53E516E1E -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=bg -Dtests.timezone=US/Indiana-Starke -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 2405 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 5ef43e900f8abeeb56cb9bba8ca1d050ec956f21

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper

2018-04-19 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445350#comment-16445350
 ] 

Ishan Chattopadhyaya commented on SOLR-4793:


I think the long-term solution could be to implement something like a 
BlobStoreResourceLoader, so that a configset (as a whole or in parts) could be 
loaded from ZK or a blob store.

> Solr Cloud can't upload large config files ( > 1MB)  to Zookeeper
> -
>
> Key: SOLR-4793
> URL: https://issues.apache.org/jira/browse/SOLR-4793
> Project: Solr
>  Issue Type: Improvement
>Reporter: Son Nguyen
>Priority: Major
> Attachments: SOLR-4793.patch
>
>
> ZooKeeper sets the znode size limit to 1MB by default, so we can't start 
> SolrCloud with some large config files, like synonyms.txt.
> Jan Høydahl has a good idea:
> "SolrCloud is designed with an assumption that you should be able to upload 
> your whole disk-based conf folder into ZK, and that you should be able to add 
> an empty Solr node to a cluster and it would download all config from ZK. So 
> immediately a splitting strategy automatically handled by ZkSolrResourceLoader 
> for large files could be one way forward, i.e. store synonyms.txt as e.g. 
> __001_synonyms.txt __002_synonyms.txt"
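
To make the splitting idea concrete, a purely illustrative sketch (the chunk 
naming follows Jan's example; the size limit and all class/method names are 
assumptions, not from the attached patch):
{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class ZnodeChunker {
  // Stay safely under ZooKeeper's default ~1MB znode limit.
  static final int CHUNK = 1024 * 1024 - 1024;

  // Split one file's bytes into __001_name, __002_name, ... chunks.
  static Map<String, byte[]> split(String name, byte[] data) {
    Map<String, byte[]> chunks = new LinkedHashMap<>();
    for (int i = 0, n = 1; i < data.length; i += CHUNK, n++) {
      chunks.put(String.format("__%03d_%s", n, name),
          Arrays.copyOfRange(data, i, Math.min(i + CHUNK, data.length)));
    }
    return chunks;
  }

  public static void main(String[] args) throws Exception {
    byte[] data = Files.readAllBytes(Paths.get("synonyms.txt"));
    split("synonyms.txt", data).forEach(
        (chunk, bytes) -> System.out.println(chunk + ": " + bytes.length + " bytes"));
  }
}
{code}
A reader (e.g. a resource loader) would reassemble the chunks in name order.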






[JENKINS] Lucene-Solr-Tests-master - Build # 2499 - Still Unstable

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2499/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [Overseer] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.cloud.Overseer.start(Overseer.java:545)  at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
  at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) 
 at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)  
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)  at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)  at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
  at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
  at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [Overseer]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at org.apache.solr.cloud.Overseer.start(Overseer.java:545)
at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)
at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([2313280D111AC527]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:303)
at sun.reflect.GeneratedMethodAccessor55.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.ZkControllerTest: 
1) 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4574 - Unstable!

2018-04-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4574/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 6 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper, InternalHttpClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1040)  at 
org.apache.solr.core.SolrCore.reload(SolrCore.java:657)  at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1306)  at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:944) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:526)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:369) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:420) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$12(ReplicationHandler.java:1159)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:352)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:730)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:955)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:864)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1051)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:647)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:192)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:499)  
at org.apache.solr.core.SolrCore.(SolrCore.java:949)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:864)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1051)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:647)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:192)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:526)  
at 

[jira] [Updated] (SOLR-12250) NegativeArraySizeException on TransactionLog if previous document more than 1.9GB

2018-04-19 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12250:

Attachment: SOLR-12250.patch

> NegativeArraySizeException on TransactionLog if previous document more than 
> 1.9GB
> -
>
> Key: SOLR-12250
> URL: https://issues.apache.org/jira/browse/SOLR-12250
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12250.patch
>
>
> In TransactionLog, we have
> {code:java}
> bufSize = Math.min(1024*1024, lastAddSize+(lastAddSize>>3)+256);
> MemOutputStream out = new MemOutputStream(new byte[bufSize]);
> {code}
> Note that bufSize will be a negative number if lastAddSize > 1908874127 
> (which is around 1.9GB).
> Although this seems to stem from a user error (sending such a big document), 
> the exception is thrown for the update after the big one. Therefore it is 
> better to fix the problem here, and to address how we can prohibit users 
> from sending very big documents in other issues.






[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper

2018-04-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445311#comment-16445311
 ] 

Steve Rowe commented on SOLR-4793:
--

FYI I considered adding info on configuring ZooKeeper clients other than those 
invoked by {{bin/solr}}, e.g. ZK's {{zkCli.sh}} and Solr's cloud script 
{{zkcli.sh}}, but neither of those is covered elsewhere in the ref guide, and 
I *think* {{bin/solr zk}} commands cover most users' needs, so I didn't end up 
including info on configuring those clients.  (ZK's {{zkCli.sh}} reads 
{{zookeeper-env.sh}}, so a user employing that approach for configuring ZK 
nodes will get that client configured for free.)
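
For context, a minimal sketch of how a plain ZooKeeper client picks up 
{{jute.maxbuffer}} (the 10MB value, the connect string, and the znode path 
below are arbitrary assumptions, not taken from the patch):
{code:java}
import org.apache.zookeeper.ZooKeeper;

public class LargeZnodeRead {
  public static void main(String[] args) throws Exception {
    // jute.maxbuffer is read as a JVM system property (default is ~1MB);
    // in practice it is passed as -Djute.maxbuffer=... so it is in place
    // before any ZooKeeper class loads. Set programmatically here only to
    // keep the sketch self-contained.
    System.setProperty("jute.maxbuffer", Integer.toString(10 * 1024 * 1024));
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {});
    // Hypothetical znode path holding a large config file.
    byte[] data = zk.getData("/configs/mycoll/synonyms.txt", false, null);
    System.out.println("read " + data.length + " bytes");
    zk.close();
  }
}
{code}
Note that the limit must be raised consistently on the ZK servers and on every 
client that reads or writes the oversized znodes.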

> Solr Cloud can't upload large config files ( > 1MB)  to Zookeeper
> -
>
> Key: SOLR-4793
> URL: https://issues.apache.org/jira/browse/SOLR-4793
> Project: Solr
>  Issue Type: Improvement
>Reporter: Son Nguyen
>Priority: Major
> Attachments: SOLR-4793.patch
>
>
> ZooKeeper sets the znode size limit to 1MB by default, so we can't start 
> SolrCloud with some large config files, like synonyms.txt.
> Jan Høydahl has a good idea:
> "SolrCloud is designed with an assumption that you should be able to upload 
> your whole disk-based conf folder into ZK, and that you should be able to add 
> an empty Solr node to a cluster and it would download all config from ZK. So 
> immediately a splitting strategy automatically handled by ZkSolrResourceLoader 
> for large files could be one way forward, i.e. store synonyms.txt as e.g. 
> __001_synonyms.txt __002_synonyms.txt"






[JENKINS-EA] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-11-ea+5) - Build # 25 - Still Unstable!

2018-04-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/25/
Java: 64bit/jdk-11-ea+5 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SSLMigrationTest.test

Error Message:
Replica didn't have the proper urlScheme in the ClusterState

Stack Trace:
java.lang.AssertionError: Replica didn't have the proper urlScheme in the 
ClusterState
at 
__randomizedtesting.SeedInfo.seed([553375A831A3D147:DD674A729F5FBCBF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SSLMigrationTest.assertReplicaInformation(SSLMigrationTest.java:104)
at 
org.apache.solr.cloud.SSLMigrationTest.testMigrateSSL(SSLMigrationTest.java:97)
at org.apache.solr.cloud.SSLMigrationTest.test(SSLMigrationTest.java:61)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Comment Edited] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper

2018-04-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445295#comment-16445295
 ] 

Steve Rowe edited comment on SOLR-4793 at 4/20/18 4:53 AM:
---

I've attached a patch that adds {{jute.maxbuffer}} documentation to the ref 
guide page on setting up an external ZooKeeper cluster that [~ctargett] 
improved over on SOLR-12163.

I've included links to the new section:
 # from the large model section on the LTR page, as an alternative to 
SOLR-11250's {{DefaultWrapperModel}}; inspired by the patch on SOLR-11049 - cc 
[~cpoerschke];
 # from the OpenNLP NER mention on the URP page (LUCENE-2899); and
 # from the OpenNLP NER URP javadocs.

Feedback is welcome.


was (Author: steve_rowe):
I've attached a patch that adds {{jute.maxbuffer}} documentation to the ref 
guide page on setting up an external ZooKeeper cluster that [~ctargett] 
improved over on SOLR-12163.

I've included links to the new section:
 # from the large model section on the LTR page, as an alternative to 
SOLR-11250's {{DefaultWrapperModel}}; inspired by the patch on SOLR-11049 - cc 
[~cpoerschke];
 # from the OpenNLP NER mention on the URP page (LUCENE-2899); and
 # from the OpenNLP NER URP javadocs.

Feedback is welcome.

> Solr Cloud can't upload large config files ( > 1MB)  to Zookeeper
> -
>
> Key: SOLR-4793
> URL: https://issues.apache.org/jira/browse/SOLR-4793
> Project: Solr
>  Issue Type: Improvement
>Reporter: Son Nguyen
>Priority: Major
> Attachments: SOLR-4793.patch
>
>
> ZooKeeper sets the znode size limit to 1MB by default, so we can't start 
> SolrCloud with some large config files, like synonyms.txt.
> Jan Høydahl has a good idea:
> "SolrCloud is designed with an assumption that you should be able to upload 
> your whole disk-based conf folder into ZK, and that you should be able to add 
> an empty Solr node to a cluster and it would download all config from ZK. So 
> immediately a splitting strategy automatically handled by ZkSolrResourceLoader 
> for large files could be one way forward, i.e. store synonyms.txt as e.g. 
> __001_synonyms.txt __002_synonyms.txt"






[jira] [Comment Edited] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper

2018-04-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445295#comment-16445295
 ] 

Steve Rowe edited comment on SOLR-4793 at 4/20/18 4:50 AM:
---

I've attached a patch that adds {{jute.maxbuffer}} documentation to the ref 
guide page on setting up an external ZooKeeper cluster that [~ctargett] 
improved over on SOLR-12163.

I've included links to the new section:
 # from the large model section on the LTR page, as an alternative to 
SOLR-11250's {{DefaultWrapperModel}}; inspired by the patch on SOLR-11049 - cc 
[~cpoerschke];
 # from the OpenNLP NER mention on the URP page (LUCENE-2899); and
 # from the OpenNLP NER URP javadocs.

Feedback is welcome.


was (Author: steve_rowe):
I've attached a patch that adds {{jute.maxbuffer}} documentation to the ref 
guide page on setting up an external ZooKeeper cluster that [~ctargett] 
improved over on SOLR-12163.

I've included links to the new section:

# from the large model section on the LTR page, as an alternative to 
SOLR-11250's {{DefaultWrapperModel}}; inspired by the patch on SOLR-11049 - cc 
[~cpoerschke];
# from the OpenNLP NER mention on the URP page; and
# from the OpenNLP NER URP javadocs.

Feedback is welcome.

> Solr Cloud can't upload large config files ( > 1MB)  to Zookeeper
> -
>
> Key: SOLR-4793
> URL: https://issues.apache.org/jira/browse/SOLR-4793
> Project: Solr
>  Issue Type: Improvement
>Reporter: Son Nguyen
>Priority: Major
> Attachments: SOLR-4793.patch
>
>
> ZooKeeper sets the znode size limit to 1MB by default, so we can't start 
> SolrCloud with some large config files, like synonyms.txt.
> Jan Høydahl has a good idea:
> "SolrCloud is designed with an assumption that you should be able to upload 
> your whole disk-based conf folder into ZK, and that you should be able to add 
> an empty Solr node to a cluster and it would download all config from ZK. So 
> immediately a splitting strategy automatically handled by ZkSolrResourceLoader 
> for large files could be one way forward, i.e. store synonyms.txt as e.g. 
> __001_synonyms.txt __002_synonyms.txt"






[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper

2018-04-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445295#comment-16445295
 ] 

Steve Rowe commented on SOLR-4793:
--

I've attached a patch that adds {{jute.maxbuffer}} documentation to the ref 
guide page on setting up an external ZooKeeper cluster that [~ctargett] 
improved over on SOLR-12163.

I've included links to the new section:

# from the large model section on the LTR page, as an alternative to 
SOLR-11250's {{DefaultWrapperModel}}; inspired by the patch on SOLR-11049 - cc 
[~cpoerschke];
# from the OpenNLP NER mention on the URP page; and
# from the OpenNLP NER URP javadocs.

Feedback is welcome.

> Solr Cloud can't upload large config files ( > 1MB)  to Zookeeper
> -
>
> Key: SOLR-4793
> URL: https://issues.apache.org/jira/browse/SOLR-4793
> Project: Solr
>  Issue Type: Improvement
>Reporter: Son Nguyen
>Priority: Major
> Attachments: SOLR-4793.patch
>
>
> ZooKeeper sets the znode size limit to 1MB by default, so we can't start 
> SolrCloud with some large config files, like synonyms.txt.
> Jan Høydahl has a good idea:
> "SolrCloud is designed with an assumption that you should be able to upload 
> your whole disk-based conf folder into ZK, and that you should be able to add 
> an empty Solr node to a cluster and it would download all config from ZK. So 
> immediately a splitting strategy automatically handled by ZkSolrResourceLoader 
> for large files could be one way forward, i.e. store synonyms.txt as e.g. 
> __001_synonyms.txt __002_synonyms.txt"






[jira] [Commented] (LUCENE-8258) GeoComplexPolygon fails computing traversals

2018-04-19 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445291#comment-16445291
 ] 

Ignacio Vera commented on LUCENE-8258:
--

I think the solution needs to check whether the above and below planes actually 
intersect inside the world. I had a look at the intersection code: the 
intersection is determined by the solution of a quadratic formula, in 
particular by the term under the square root.

In our case the planes are perpendicular to each other and both parallel to one 
of the axes, so checking whether the quadratic has a real solution is 
straightforward.

Attached a patch with the extra check.
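
For readers outside the spatial3d code, the general shape of such a check (a 
generic sketch; the names and the exact quadratic are assumptions, the attached 
patch is authoritative):
{code:java}
// A plane/unit-sphere intersection reduces to a quadratic a*t^2 + b*t + c = 0;
// it has a real solution (i.e., the planes meet inside the world) iff the
// discriminant under the square root is non-negative.
static boolean hasRealIntersection(double a, double b, double c) {
  return b * b - 4.0 * a * c >= 0.0;
}
{code}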


> GeoComplexPolygon fails computing traversals
> 
>
> Key: LUCENE-8258
> URL: https://issues.apache.org/jira/browse/LUCENE-8258
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8258.jpg, LUCENE-8258.patch, LUCENE-8258.patch
>
>
> There are some situations where checking membership for a 
> GeoComplexPolygon results in the following error:
> {{java.lang.IllegalArgumentException: No off-plane intersection points were 
> found; can't compute traversal}}
> It seems the intersection of the auxiliary planes created is outside of the 
> world.






[jira] [Updated] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper

2018-04-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-4793:
-
Attachment: SOLR-4793.patch

> Solr Cloud can't upload large config files ( > 1MB)  to Zookeeper
> -
>
> Key: SOLR-4793
> URL: https://issues.apache.org/jira/browse/SOLR-4793
> Project: Solr
>  Issue Type: Improvement
>Reporter: Son Nguyen
>Priority: Major
> Attachments: SOLR-4793.patch
>
>
> ZooKeeper sets the znode size limit to 1MB by default, so we can't start 
> SolrCloud with some large config files, like synonyms.txt.
> Jan Høydahl has a good idea:
> "SolrCloud is designed with an assumption that you should be able to upload 
> your whole disk-based conf folder into ZK, and that you should be able to add 
> an empty Solr node to a cluster and it would download all config from ZK. So 
> immediately a splitting strategy automatically handled by ZkSolrResourceLoader 
> for large files could be one way forward, i.e. store synonyms.txt as e.g. 
> __001_synonyms.txt __002_synonyms.txt"






[jira] [Created] (SOLR-12250) NegativeArraySizeException on TransactionLog if previous document more than 1.9GB

2018-04-19 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-12250:
---

 Summary: NegativeArraySizeException on TransactionLog if previous 
document more than 1.9GB
 Key: SOLR-12250
 URL: https://issues.apache.org/jira/browse/SOLR-12250
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Cao Manh Dat
Assignee: Cao Manh Dat


In TransactionLog, we have
{code:java}
bufSize = Math.min(1024*1024, lastAddSize+(lastAddSize>>3)+256);

MemOutputStream out = new MemOutputStream(new byte[bufSize]);
{code}
Note that bufSize will be a negative number if lastAddSize > 1908874127 (which 
is around 1.9GB).

Although this seems to stem from a user error (sending such a big document), 
the exception is thrown for the update after the big one. Therefore it is 
better to fix the problem here, and to address how we can prohibit users from 
sending very big documents in other issues.
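
To make the arithmetic concrete, a self-contained sketch of the overflow (the 
clamp shown as a possible fix is an assumption; the attached patch may take a 
different approach):
{code:java}
public class BufSizeOverflowDemo {
  public static void main(String[] args) {
    int lastAddSize = 1908874128; // per the description, > 1908874127 overflows
    // int arithmetic wraps past Integer.MAX_VALUE, so the sum goes negative
    // and Math.min picks the negative value.
    int bufSize = Math.min(1024 * 1024, lastAddSize + (lastAddSize >> 3) + 256);
    System.out.println(bufSize); // -2147483646
    // Possible fix: do the sizing arithmetic in long and clamp before casting.
    int safe = (int) Math.min(1024L * 1024L,
        (long) lastAddSize + (lastAddSize >> 3) + 256);
    System.out.println(safe); // 1048576
    byte[] buf = new byte[bufSize]; // throws NegativeArraySizeException
  }
}
{code}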






[jira] [Updated] (LUCENE-8258) GeoComplexPolygon fails computing traversals

2018-04-19 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8258:
-
Attachment: LUCENE-8258.patch

> GeoComplexPolygon fails computing traversals
> 
>
> Key: LUCENE-8258
> URL: https://issues.apache.org/jira/browse/LUCENE-8258
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8258.jpg, LUCENE-8258.patch, LUCENE-8258.patch
>
>
> There are some situations where checking membership for a 
> GeoComplexPolygon results in the following error:
> {{java.lang.IllegalArgumentException: No off-plane intersection points were 
> found; can't compute traversal}}
> It seems the intersection of the auxiliary planes created is outside of the 
> world.






[jira] [Created] (SOLR-12249) Grouping on a solr.TextField works in stand-alone but not in SolrCloud

2018-04-19 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-12249:
-

 Summary: Grouping on a solr.TextField works in stand-alone but not 
in SolrCloud
 Key: SOLR-12249
 URL: https://issues.apache.org/jira/browse/SOLR-12249
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.2, master (8.0)
Reporter: Erick Erickson


I didn't test this with master. Under the covers in stand-alone mode, the "min" 
function is silently substituted for the grouping, but that's not true in 
SolrCloud mode. I broke this JIRA out separately to discuss whether it _ever_ 
makes sense to group by a tokenized text field.

Grouping by the min value in a field is at least consistent, but on a text 
field I don't think it makes sense.

I propose that we explicitly disallow this in both stand-alone and Cloud mode, 
especially now that there's the SortableTextField.

Comments?






[jira] [Commented] (SOLR-12249) Grouping on a solr.TextField works in stand-alone but not in SolrCloud

2018-04-19 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445247#comment-16445247
 ] 

Erick Erickson commented on SOLR-12249:
---

I think these are related but separate.

> Grouping on a solr.TextField works in stand-alone but not in SolrCloud
> --
>
> Key: SOLR-12249
> URL: https://issues.apache.org/jira/browse/SOLR-12249
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.2, master (8.0)
>Reporter: Erick Erickson
>Priority: Minor
>
> I didn't test this with master. Under the covers in stand-alone mode, the 
> "min" function is silently substituted for the grouping, but that's not true 
> in SolrCloud mode. I broke this JIRA out separately to discuss whether it 
> _ever_ makes sense to group by a tokenized text field.
> Grouping by the min value in a field is at least consistent, but on a text 
> field I don't think it makes sense.
> I propose that we explicitly disallow this in both stand-alone and Cloud 
> mode, especially now that there's the SortableTextField.
> Comments?






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10) - Build # 553 - Still Unstable!

2018-04-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/553/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

16 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([D48EF89FC12C73E5:8737BA2F233DE61F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 

[jira] [Updated] (SOLR-12248) Grouping in SolrCloud fails if indexed="false" docValues="true" and sorted="false"

2018-04-19 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12248:
--
Affects Version/s: 6.6.2

> Grouping in SolrCloud fails if indexed="false" docValues="true" and 
> sorted="false"
> --
>
> Key: SOLR-12248
> URL: https://issues.apache.org/jira/browse/SOLR-12248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.2
>Reporter: Erick Erickson
>Priority: Minor
>
> In SolrCloud _only_ (it works in stand-alone mode), a field defined as:
> <field ... indexed="false" docValues="true" stored="false" />
> will fail with the following error:
> java.lang.NullPointerException
> org.apache.solr.schema.BoolField.toExternal(BoolField.java:131)
> org.apache.solr.schema.BoolField.toObject(BoolField.java:142)
> org.apache.solr.schema.BoolField.toObject(BoolField.java:51)
> org.apache.solr.search.grouping.endresulttransformer.GroupedEndResultTransformer.transform(GroupedEndResultTransformer.java:72)
> org.apache.solr.handler.component.QueryComponent.groupedFinishStage(QueryComponent.java:830)
> org.apache.solr.handler.component.QueryComponent.finishStage(QueryComponent.java:793)
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:435)
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> .
> .
> Curiously enough, it succeeds with a field identically defined except for 
> stored="true".






[jira] [Created] (SOLR-12248) Grouping in SolrCloud fails if indexed="false" docValues="true" and sorted="false"

2018-04-19 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-12248:
-

 Summary: Grouping in SolrCloud fails if indexed="false" 
docValues="true" and sorted="false"
 Key: SOLR-12248
 URL: https://issues.apache.org/jira/browse/SOLR-12248
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson


In SolrCloud _only_ (it works in stand-alone mode), a field defined as:

<field ... indexed="false" docValues="true" stored="false" />

will fail with the following error:
java.lang.NullPointerException
org.apache.solr.schema.BoolField.toExternal(BoolField.java:131)
org.apache.solr.schema.BoolField.toObject(BoolField.java:142)
org.apache.solr.schema.BoolField.toObject(BoolField.java:51)
org.apache.solr.search.grouping.endresulttransformer.GroupedEndResultTransformer.transform(GroupedEndResultTransformer.java:72)
org.apache.solr.handler.component.QueryComponent.groupedFinishStage(QueryComponent.java:830)
org.apache.solr.handler.component.QueryComponent.finishStage(QueryComponent.java:793)
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:435)
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
.
.



Curiously enough, it succeeds with a field identically defined except for 
stored="true".






[jira] [Updated] (SOLR-12159) Add memset Stream Evaluator

2018-04-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12159:
--
Description: 
The *memset* function copies multiple numeric arrays into memory from fields in 
an underlying TupleStream. This will be much more memory efficient than calling 
the *col* function multiple times on an in-memory list of Tuples. Sample 
syntax:
{code:java}
let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=1),
             cols="field1, field2",
             vars="c, d",
             size=1),
    e=corr(c, d))
{code}

  was:
The *memset* function will copy multiple numeric arrays into memory from fields 
in an underlying TupleStream. This will be much more memory efficient than 
calling the *col* function multiple times on an in-memory list of Tuples. 
Sample syntax:
{code:java}
let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=1),
             cols="field1, field2",
             vars="c, d",
             size=1),
    e=corr(c, d))
{code}


> Add memset Stream Evaluator
> ---
>
> Key: SOLR-12159
> URL: https://issues.apache.org/jira/browse/SOLR-12159
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12159.patch, SOLR-12159.patch, SOLR-12159.patch
>
>
> The *memset* function copies multiple numeric arrays into memory from fields 
> in an underlying TupleStream. This will be much more memory efficient than 
> calling the *col* function multiple times on an in-memory list of Tuples. 
> Sample syntax:
> {code:java}
> let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=1),
>              cols="field1, field2",
>              vars="c, d",
>              size=1),
>     e=corr(c, d))
> {code}






[jira] [Updated] (SOLR-12159) Add memset Stream Evaluator

2018-04-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12159:
--
Attachment: SOLR-12159.patch

> Add memset Stream Evaluator
> ---
>
> Key: SOLR-12159
> URL: https://issues.apache.org/jira/browse/SOLR-12159
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12159.patch, SOLR-12159.patch, SOLR-12159.patch
>
>
> The *memset* function will copy multiple numeric arrays into memory from 
> fields in an underlying TupleStream. This will be much more memory efficient 
> than calling the *col* function multiple times on an in-memory list of 
> Tuples. Sample syntax:
> {code:java}
> let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=1),
>              cols="field1, field2",
>              vars="c, d",
>              size=1),
>     e=corr(c, d))
> {code}






[jira] [Updated] (SOLR-12159) Add memset Stream Evaluator

2018-04-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12159:
--
Description: 
The *memset* function will copy multiple numeric arrays into memory from fields 
in an underlying TupleStream. This will be much more memory efficient than 
calling the *col* function multiple times on an in-memory list of Tuples. 
Sample syntax:
{code:java}
let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=1),
             cols="field1, field2",
             vars="c, d",
             size=1),
    e=corr(c, d))
{code}

  was:
The *memset* function will copy multiple numeric arrays into memory from fields 
in an underlying TupleStream. This will be much more memory efficient than 
calling the *col* function multiple times on an in-memory list of Tuples. 
Sample syntax:
{code:java}
let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
             cols="field1, field2",
             vars="c, d",
             size=1),
    e=corr(c, d))
{code}


> Add memset Stream Evaluator
> ---
>
> Key: SOLR-12159
> URL: https://issues.apache.org/jira/browse/SOLR-12159
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12159.patch, SOLR-12159.patch, SOLR-12159.patch
>
>
> The *memset* function will copy multiple numeric arrays into memory from 
> fields in an underlying TupleStream. This will be much more memory efficient 
> than calling the *col* function multiple times on an in-memory list of 
> Tuples. Sample syntax:
> {code:java}
> let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=1),
>              cols="field1, field2",
>              vars="c, d",
>              size=1),
>     e=corr(c, d))
> {code}






[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle

2018-04-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445122#comment-16445122
 ] 

David Smiley commented on SOLR-11200:
-

+1 sounds great Hoss.

I don't mean to suggest we shouldn't have a toggle; it's useful to have the 
ability.

> provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
> ---
>
> Key: SOLR-11200
> URL: https://issues.apache.org/jira/browse/SOLR-11200
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Nawab Zada Asad iqbal
>Priority: Minor
> Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch
>
>
> This config can be useful while bulk indexing. Lucene introduced it in 
> https://issues.apache.org/jira/browse/LUCENE-6119.






[jira] [Updated] (SOLR-12159) Add memset Stream Evaluator

2018-04-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12159:
--
Description: 
The *memset* function will copy multiple numeric arrays into memory from fields 
in an underlying TupleStream. This will be much more memory efficient than 
calling the *col* function multiple times on an in-memory list of Tuples. 
Sample syntax:
{code:java}
let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
             cols="field1, field2",
             vars="c, d",
             size=1),
    e=corr(c, d))
{code}
 

  was:
The *memset* function will copy multiple numeric arrays into memory from fields 
in an underlying TupleStream. This will be much more memory efficient than 
calling the *col* function multiple times on an in-memory list of Tuples. 
Sample syntax:
{code:java}
let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
             cols="field1, field2",
             vars="c, d"),
    e=corr(c, d))
{code}
 


> Add memset Stream Evaluator
> ---
>
> Key: SOLR-12159
> URL: https://issues.apache.org/jira/browse/SOLR-12159
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12159.patch, SOLR-12159.patch
>
>
> The *memset* function will copy multiple numeric arrays into memory from 
> fields in an underlying TupleStream. This will be much more memory efficient 
> than calling the *col* function multiple times on an in-memory list of 
> Tuples. Sample syntax:
> {code:java}
> let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
>              cols="field1, field2",
>              vars="c, d",
>              size=1),
>     e=corr(c, d))
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+5) - Build # 7276 - Still Unstable!

2018-04-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7276/
Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

11 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([82B9BAD886814A47:BB370398A97E83B9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:298)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:841)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 

[jira] [Updated] (SOLR-12159) Add memset Stream Evaluator

2018-04-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12159:
--
Attachment: SOLR-12159.patch

> Add memset Stream Evaluator
> ---
>
> Key: SOLR-12159
> URL: https://issues.apache.org/jira/browse/SOLR-12159
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12159.patch, SOLR-12159.patch
>
>
> The *memset* function will copy multiple numeric arrays into memory from 
> fields in an underlying TupleStream. This will be much more memory efficient 
> than calling the *col* function multiple times on an in-memory list of 
> Tuples. Sample syntax:
> {code:java}
> let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
>              cols="field1, field2",
>              vars="c, d"),
>     e=corr(c, d))
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 574 - Still Unstable

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/574/

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.deleteCollectionWithDownNodes

Error Message:
Error from server at https://127.0.0.1:33352/solr: KeeperErrorCode = Session 
expired for /configs/conf

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:33352/solr: KeeperErrorCode = Session expired 
for /configs/conf
at 
__randomizedtesting.SeedInfo.seed([4E5AAEDE829F7C43:D06FCA26A4BC30CB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.deleteCollectionWithDownNodes(TestDeleteCollectionOnDownNodes.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size

2018-04-19 Thread Rupa Shankar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445039#comment-16445039
 ] 

Rupa Shankar commented on SOLR-11277:
-

Updated patch based on Tomás' comments (thanks!) and opened a GitHub PR here: 
https://github.com/apache/lucene-solr/pull/358

> Add auto hard commit setting based on tlog size
> ---
>
> Key: SOLR-11277
> URL: https://issues.apache.org/jira/browse/SOLR-11277
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Rupa Shankar
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, 
> SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, 
> max_size_auto_commit.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When indexing documents of variable sizes and at variable schedules, it can 
> be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. 
> We’ve had some occurrences of really huge tlogs, resulting in serious issues, 
> so in an attempt to avoid this, it would be great to have a “maxSize” setting 
> based on the tlog size on disk. 
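To make the proposal concrete, here is a minimal sketch of a size-based commit 
check. This is illustrative only: the class and method names below are 
hypothetical, not Solr's actual commit-tracking API.
{code:java}
// Hypothetical sketch of the proposed "maxSize" behavior; not Solr's
// actual CommitTracker API.
public class TlogSizeCommitPolicy {
  private final long maxSizeBytes; // limit parsed from the proposed "maxSize" setting

  public TlogSizeCommitPolicy(long maxSizeBytes) {
    this.maxSizeBytes = maxSizeBytes;
  }

  /** Returns true once the on-disk tlog has outgrown the configured limit. */
  public boolean shouldCommit(long currentTlogSizeBytes) {
    return maxSizeBytes > 0 && currentTlogSizeBytes >= maxSizeBytes;
  }
}
{code}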



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #358: SOLR-11277: Add auto hard commit setting base...

2018-04-19 Thread rupss
GitHub user rupss opened a pull request:

https://github.com/apache/lucene-solr/pull/358

SOLR-11277: Add auto hard commit setting based on tlog size



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rupss/lucene-solr auto_hard_commit

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/358.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #358


commit feeefed1d749ca90d149d5af045c9af79dd9a6c7
Author: Rupa Shankar 
Date:   2018-04-20T00:13:02Z

SOLR-11277: Add auto hard commit setting based on tlog size




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9613) core or collection -> dataimport dangerous default

2018-04-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445033#comment-16445033
 ] 

Shawn Heisey edited comment on SOLR-9613 at 4/20/18 12:13 AM:
--

SOLR-11933 unchecked the clean checkbox by default for both import types.  I 
don't think that was the right thing to do.  [~msporleder]'s idea seems much 
better to me.  I think we should re-open this issue and implement it.

What does everyone think about this:  If the user has actually clicked on the 
clean checkbox, set a flag so that the checkbox will remain in the 
selected state even if the import type is changed.



was (Author: elyograg):
SOLR-11933 unchecked the clean checkbox by default for both import types.  I 
don't think that was the right thing to do.  [~msporleder]'s idea seems much 
better to me.  I think we should re-open this issue and implement it.

A better option would be to update the checkbox to the value appropriate for 
the type of import selected.

What does everyone think about this:  If the user has actually clicked on the 
clean checkbox, set a flag so that the checkbox will remain in the 
selected state even if the import type is changed.


> core or collection -> dataimport dangerous default
> --
>
> Key: SOLR-9613
> URL: https://issues.apache.org/jira/browse/SOLR-9613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Matthew Sporleder
>Assignee: Alexandre Rafalovitch
>Priority: Major
> Fix For: 7.3
>
>
> When browsing to dataimport in the web gui and selecting "delta-import" from 
> the drop down, the "full-import" checkbox selections stay checked, including 
> "clean", which is very dangerous for a delta-import, as it deletes most of 
> your data!
> A JS event to clear those checkboxes on selection from that dropdown would 
> save a lot of accidental anguish.
> {code}
> // Clear the "clean" checkbox whenever the command drop-down is
> // switched to delta-import.
> var command = document.getElementById("command");
> command.onchange = function () {
>   if (command.value === "delta-import") {
>     document.getElementById("clean").checked = false;
>   }
> };
> {code}
> or whatever



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9613) core or collection -> dataimport dangerous default

2018-04-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445033#comment-16445033
 ] 

Shawn Heisey commented on SOLR-9613:


SOLR-11933 unchecked the clean checkbox by default for both import types.  I 
don't think that was the right thing to do.  [~msporleder]'s idea seems much 
better to me.  I think we should re-open this issue and implement it.

A better option would be to update the checkbox to the value appropriate for 
the type of import selected.

What does everyone think about this:  If the user has actually clicked on the 
clean checkbox, set a flag so that the checkbox will remain in the 
selected state even if the import type is changed.


> core or collection -> dataimport dangerous default
> --
>
> Key: SOLR-9613
> URL: https://issues.apache.org/jira/browse/SOLR-9613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Matthew Sporleder
>Assignee: Alexandre Rafalovitch
>Priority: Major
> Fix For: 7.3
>
>
> When browsing to dataimport in the web gui and selecting "delta-import" from 
> the drop down, the "full-import" checkbox selections stay checked, including 
> "clean", which is very dangerous for a delta-import, as it deletes most of 
> your data!
> A JS event to clear those checkboxes on selection from that dropdown would 
> save a lot of accidental anguish.
> {code}
> // Clear the "clean" checkbox whenever the command drop-down is
> // switched to delta-import.
> var command = document.getElementById("command");
> command.onchange = function () {
>   if (command.value === "delta-import") {
>     document.getElementById("clean").checked = false;
>   }
> };
> {code}
> or whatever



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11277) Add auto hard commit setting based on tlog size

2018-04-19 Thread Rupa Shankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rupa Shankar updated SOLR-11277:

Attachment: SOLR-11277.patch

> Add auto hard commit setting based on tlog size
> ---
>
> Key: SOLR-11277
> URL: https://issues.apache.org/jira/browse/SOLR-11277
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Rupa Shankar
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, 
> SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, 
> max_size_auto_commit.patch
>
>
> When indexing documents of variable sizes and at variable schedules, it can 
> be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. 
> We’ve had some occurrences of really huge tlogs, resulting in serious issues, 
> so in an attempt to avoid this, it would be great to have a “maxSize” setting 
> based on the tlog size on disk. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 564 - Unstable!

2018-04-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/564/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionAddShardWithReplicaTypeUsingPolicy

Error Message:
Could not find collection : policiesTest

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : policiesTest
at 
__randomizedtesting.SeedInfo.seed([DC3F55B648B43B85:4C6D40254578FA79]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:256)
at 
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionAddShardWithReplicaTypeUsingPolicy(TestPolicyCloud.java:273)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (SOLR-11933) DIH gui shouldn't have "clean" be checked by default

2018-04-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444936#comment-16444936
 ] 

Tomás Fernández Löbbe commented on SOLR-11933:
--

SGTM. This Jira is closed (and released); the changes must be done as part of a 
new one.

> DIH gui shouldn't have "clean" be checked by default
> 
>
> Key: SOLR-11933
> URL: https://issues.apache.org/jira/browse/SOLR-11933
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 7.2
>Reporter: Eric Pugh
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 7.3, master (8.0)
>
> Attachments: fef1d06a2eb15a0fd36eb91124af413a19d95528.diff
>
>
> The DIH webapp by default has the "clean" checkbox enabled.   Clean is very 
> dangerous because you delete all the data first, and then load the data.   
> Making this the default choice is bad UX.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12159) Add memset Stream Evaluator

2018-04-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12159:
--
Description: 
The *memset* function will copy multiple numeric arrays into memory from fields 
in an underlying TupleStream. This will be much more memory efficient than 
calling the *col* function multiple times on an in-memory list of Tuples. 
Sample syntax:
{code:java}
let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
             cols="field1, field2",
             vars="c, d"),
    e=corr(c, d))
{code}
 

  was:
The *memset* function will copy multiple numeric arrays into memory from fields 
in an underlying TupleStream. This will be much more memory efficient than 
calling the *col* function multiple times on an in-memory list of Tuples. 
Sample syntax:
{code:java}
let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
             copy="field1, field2",
             vars="c, d"),
    e=corr(c, d))
{code}
 


> Add memset Stream Evaluator
> ---
>
> Key: SOLR-12159
> URL: https://issues.apache.org/jira/browse/SOLR-12159
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12159.patch
>
>
> The *memset* function will copy multiple numeric arrays into memory from 
> fields in an underlying TupleStream. This will be much more memory efficient 
> than calling the *col* function multiple times on an in-memory list of 
> Tuples. Sample syntax:
> {code:java}
> let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
>              cols="field1, field2",
>              vars="c, d"),
>     e=corr(c, d))
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12159) Add memset Stream Evaluator

2018-04-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-12159:
-

Assignee: Joel Bernstein

> Add memset Stream Evaluator
> ---
>
> Key: SOLR-12159
> URL: https://issues.apache.org/jira/browse/SOLR-12159
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12159.patch
>
>
> The *memset* function will copy multiple numeric arrays into memory from 
> fields in an underlying TupleStream. This will be much more memory efficient 
> than calling the *col* function multiple times on an in-memory list of 
> Tuples. Sample syntax:
> {code:java}
> let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
>              copy="field1, field2",
>              vars="c, d"),
>     e=corr(c, d))
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12159) Add memset Stream Evaluator

2018-04-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12159:
--
Attachment: SOLR-12159.patch

> Add memset Stream Evaluator
> ---
>
> Key: SOLR-12159
> URL: https://issues.apache.org/jira/browse/SOLR-12159
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12159.patch
>
>
> The *memset* function will copy multiple numeric arrays into memory from 
> fields in an underlying TupleStream. This will be much more memory efficient 
> than calling the *col* function multiple times on an in-memory list of 
> Tuples. Sample syntax:
> {code:java}
> let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=5),
>              copy="field1, field2",
>              vars="c, d"),
>     e=corr(c, d))
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle

2018-04-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444900#comment-16444900
 ] 

Hoss Man commented on SOLR-11200:
-

we could potentially do that with an autoscaling trigger – but that doesn't 
really negate the value of having an explicit setting for this.

If the setting is true/false then force the underlying CMS value to true/false 
... down the road, if someone wants to try writing a trigger that monitors 
disk IO and the search rate, that trigger could be designed to respect the 
explicit setting if set (and be a NoopTrigger) – but if the setting is 
unspecified, then dynamically toggle the underlying value on the CMS directly 
as the trigger condition happens.
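As a rough sketch of that tri-state mapping (assuming a hypothetical 
{{autoIoThrottle}} setting name; the enable/disable methods themselves do 
exist on Lucene's ConcurrentMergeScheduler):
{code:java}
import org.apache.lucene.index.ConcurrentMergeScheduler;

public class MergeThrottleToggle {
  /**
   * Sketch only: "autoIoThrottle" is a hypothetical setting. When it is
   * explicitly true or false we force the CMS value; when unset we leave
   * Lucene's default (throttling enabled) so a future trigger could manage it.
   */
  public static void apply(ConcurrentMergeScheduler cms, Boolean autoIoThrottle) {
    if (Boolean.TRUE.equals(autoIoThrottle)) {
      cms.enableAutoIOThrottle();
    } else if (Boolean.FALSE.equals(autoIoThrottle)) {
      cms.disableAutoIOThrottle();
    }
  }
}
{code}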

> provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
> ---
>
> Key: SOLR-11200
> URL: https://issues.apache.org/jira/browse/SOLR-11200
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Nawab Zada Asad iqbal
>Priority: Minor
> Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch
>
>
> This config can be useful while bulk indexing. Lucene introduced it 
> https://issues.apache.org/jira/browse/LUCENE-6119 . 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 532 - Still Unstable

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/532/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1009/consoleText

[repro] Revision: 42da6f795d8cd68891845f20201a902f7da4c579

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=MetricTriggerIntegrationTest 
-Dtests.method=testMetricTrigger -Dtests.seed=BA52D42FF4FED846 
-Dtests.multiplier=2 -Dtests.locale=en -Dtests.timezone=America/Kralendijk 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=AssignBackwardCompatibilityTest 
-Dtests.method=test -Dtests.seed=BA52D42FF4FED846 -Dtests.multiplier=2 
-Dtests.locale=mt-MT -Dtests.timezone=Asia/Hong_Kong -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=NodeAddedTriggerTest 
-Dtests.method=testRestoreState -Dtests.seed=BA52D42FF4FED846 
-Dtests.multiplier=2 -Dtests.locale=nl-NL -Dtests.timezone=Africa/Kampala 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=BA52D42FF4FED846 -Dtests.multiplier=2 
-Dtests.locale=id-ID -Dtests.timezone=Asia/Pyongyang -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
5ef43e900f8abeeb56cb9bba8ca1d050ec956f21
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 42da6f795d8cd68891845f20201a902f7da4c579

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AssignBackwardCompatibilityTest
[repro]   NodeAddedTriggerTest
[repro]   IndexSizeTriggerTest
[repro]   MetricTriggerIntegrationTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.AssignBackwardCompatibilityTest|*.NodeAddedTriggerTest|*.IndexSizeTriggerTest|*.MetricTriggerIntegrationTest"
 -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=BA52D42FF4FED846 -Dtests.multiplier=2 -Dtests.locale=mt-MT 
-Dtests.timezone=Asia/Hong_Kong -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 5669 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.AssignBackwardCompatibilityTest
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.MetricTriggerIntegrationTest
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 5ef43e900f8abeeb56cb9bba8ca1d050ec956f21

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-repro - Build # 531 - Unstable

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/531/

[...truncated 33 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/573/consoleText

[repro] Revision: 0c542c44d9ec6204bec912a6ab138a0cfb5533d0

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=253917080D88AFBF -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=en-MT -Dtests.timezone=Asia/Makassar 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=253917080D88AFBF 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=en-MT 
-Dtests.timezone=Asia/Makassar -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
5ef43e900f8abeeb56cb9bba8ca1d050ec956f21
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 0c542c44d9ec6204bec912a6ab138a0cfb5533d0

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=253917080D88AFBF -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=en-MT -Dtests.timezone=Asia/Makassar -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 5295 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 5ef43e900f8abeeb56cb9bba8ca1d050ec956f21

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-master - Build # 2498 - Unstable

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2498/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([1A9A3AB53E516E1E:231483F511AEA7E0]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:109)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:299)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12859 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
   [junit4]   2> 590781 INFO  
(SUITE-IndexSizeTriggerTest-seed#[1A9A3AB53E516E1E]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-04-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444877#comment-16444877
 ] 

Jan Høydahl commented on SOLR-7896:
---

{quote}What I do advocate is that the html pages (except maybe a special login 
page?) be similarly protected, not because they require protection for security 
reasons, but because a set of non-functional html pages that don't work 
properly without login can only confuse the user if rendered. We should only 
show the user pages that can provide full functionality.
{quote}
Exactly. What I'm currently about to do in this issue is to add that login 
page. But since it is fully legal to configure Solr's authentication such that 
you only protect e.g. {{security-edit}} or some admin resources, while the rest 
of the system can be used anonymously, the UI should not request login until it 
is actually required.

That's what the {{WWW-Authenticate}} headers are all about. Solr auth plugins 
already send such headers to the client today if one tries to access a 
protected resource. I have implemented an [AngularJS http 
interceptor|https://docs.angularjs.org/api/ng/service/$http#interceptors] that 
looks for code 401 and this header. The idea is that if an Ajax call results in 
401 then we'll redirect the user to the login page. And we'll choose the login 
page based on the header, i.e. a {{WWW-Authenticate: Basic xxx}} header will 
cause the login page for basic auth, etc.

Actually it turned out not to be as straightforward, since the browser 
actually throws up its login dialogue before our Angular app even gets the 
chance to look at the HTTP response. The solution is outlined in [this blog 
post|http://olefriis.blogspot.no/2014/01/http-basic-authentication-in-angularjs.html]
 and involves sending the {{X-Requested-With: XMLHttpRequest}} header from the 
Admin UI and conditionally changing the {{WWW-Authenticate}} header for 
BasicAuth from {{Basic xxx}} to e.g. {{xBasic xxx}} so that our Angular 
intercept code understands it but the browser does not. For non-Ajax clients 
you still get the ordinary header.
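A minimal server-side sketch of that header trick (class and method names are 
illustrative, not the actual SolrDispatchFilter code):
{code:java}
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AuthChallengeSketch {
  /**
   * When the request came from the Admin UI's Ajax layer (signalled by
   * X-Requested-With: XMLHttpRequest), prefix the Basic challenge so the
   * browser ignores it while the AngularJS interceptor still recognizes it.
   */
  public static void sendChallenge(HttpServletRequest req, HttpServletResponse rsp) {
    boolean ajax = "XMLHttpRequest".equals(req.getHeader("X-Requested-With"));
    String scheme = ajax ? "xBasic" : "Basic"; // non-Ajax clients keep the ordinary header
    rsp.setHeader("WWW-Authenticate", scheme + " realm=\"solr\"");
    rsp.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
  }
}
{code}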

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: authentication, login, password
> Attachments: dispatchfilter-code.png
>
>
> Now that Solr supports Authentication plugins, the missing piece is to be 
> allowed access from Admin UI when authentication is enabled. For this we need
>  * Some plumbing in Admin UI that allows the UI to detect 401 responses and 
> redirect to login page
>  * Possibility to have multiple login pages depending on auth method and 
> redirect to the correct one
>  * [AngularJS HTTP 
> interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to 
> add correct HTTP headers on all requests when user is logged in
> This issue should aim to implement some of the plumbing mentioned above, and 
> make it work with Basic Auth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: FYI Zookeeper.sync()

2018-04-19 Thread Chris Hostetter

: Thanks; I can definitely appreciate that Watchers (or more generally the
: idea of chaining async callbacks) is usually a more suitable mechanism than
: calling sync().  I've also seen some code patterns in which knowledge of

To be clear: i'm not suggesting that we *don't* need sync() calls 
anywhere, just that based on my knowledge of how we use ZK i wouldn't 
expect to see many sync() calls ... if you see calls to getData() that 
are not inside of a Watcher callback, that smells like a potential bug ... 
but i would question whether the best solution to fixing any "getData() w/o 
sync()" code paths is really "add sync()" or if it's "move this code into a 
Watcher that then caches locally".
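For reference, a minimal sketch of the sync-then-read pattern under discussion 
(error handling elided; because requests on a session are processed in order, 
the getData() sees state at least as fresh as the sync point):
{code:java}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class FreshRead {
  /** Ask the server to catch up with the leader, then read. */
  public static byte[] readLatest(ZooKeeper zk, String path)
      throws KeeperException, InterruptedException {
    zk.sync(path, (rc, p, ctx) -> { /* async completion; ignored here */ }, null);
    return zk.getData(path, false, null); // ordered after the sync on this session
  }
}
{code}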


-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-04-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444859#comment-16444859
 ] 

Jan Høydahl commented on SOLR-7896:
---

{quote}But now when I test I get the browser prompt on every single load of the 
Admin UI front page, triggered by the browser trying to load a static file.
{quote}
Found it. In {{web.xml}} we have an {{excludePatterns}} list that tries to 
short-circuit SolrDispatchFilter/HttpSolrCall for static files:
{quote}Exclude patterns is a list of directories that would be short circuited 
by the 
 SolrDispatchFilter. It includes all Admin UI related static content.
 NOTE: It is NOT a pattern but only matches the start of the HTTP ServletPath.
{quote}
However, after the introduction of Authentication (committed four days after 
the excludePatterns, on 2015-05-19), the authentication logic is run 
*before* the _excludePatterns_ check, causing e.g. BasicAuthPlugin to request 
authentication through {{WWW-Authenticate}} headers. See relevant code in the 
screenshot below:

!dispatchfilter-code.png|width=550!

Moving the short-circuit logic before {{authenticateRequest()}} fixed this 
part. Now the browser is allowed to load all static resources even if BasicAuth 
with blockUnknown=true is enabled. But the "/" and "/solr/" endpoints would 
still trigger authentication, so I added an exclusion rule in 
{{authenticateRequest()}} right after the check for PKI path exclusion.
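In pseudo-form, the reordering looks like this (a sketch with illustrative 
names, not the actual SolrDispatchFilter source):
{code:java}
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class DispatchOrderSketch {
  /** Static Admin UI resources bypass authentication entirely. */
  void doFilter(ServletRequest req, ServletResponse rsp, FilterChain chain,
                String servletPath) throws IOException, ServletException {
    if (isExcluded(servletPath)) { // excludePatterns check now runs first
      chain.doFilter(req, rsp);    // short-circuit: serve the static file
      return;
    }
    // authenticateRequest(...) and the normal HttpSolrCall dispatch follow here
  }

  private boolean isExcluded(String path) {
    // Illustrative: the real list comes from web.xml's excludePatterns
    return path.startsWith("/css/") || path.startsWith("/js/");
  }
}
{code}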

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: authentication, login, password
> Attachments: dispatchfilter-code.png
>
>
> Now that Solr supports Authentication plugins, the missing piece is to be 
> allowed access from Admin UI when authentication is enabled. For this we need
>  * Some plumbing in Admin UI that allows the UI to detect 401 responses and 
> redirect to login page
>  * Possibility to have multiple login pages depending on auth method and 
> redirect to the correct one
>  * [AngularJS HTTP 
> interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to 
> add correct HTTP headers on all requests when user is logged in
> This issue should aim to implement some of the plumbing mentioned above, and 
> make it work with Basic Auth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1807 - Unstable!

2018-04-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1807/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

12 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([3330DC3486F428DB:60899E8464E5BD21]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 

[jira] [Updated] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-04-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-7896:
--
Attachment: dispatchfilter-code.png

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: authentication, login, password
> Attachments: dispatchfilter-code.png
>
>
> Now that Solr supports Authentication plugins, the missing piece is to be 
> allowed access from Admin UI when authentication is enabled. For this we need
>  * Some plumbing in Admin UI that allows the UI to detect 401 responses and 
> redirect to login page
>  * Possibility to have multiple login pages depending on auth method and 
> redirect to the correct one
>  * [AngularJS HTTP 
> interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to 
> add correct HTTP headers on all requests when user is logged in
> This issue should aim to implement some of the plumbing mentioned above, and 
> make it work with Basic Auth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: FYI Zookeeper.sync()

2018-04-19 Thread David Smiley
Thanks; I can definitely appreciate that Watchers (or, more generally, chained
async callbacks) are usually a more suitable mechanism than calling sync().
I've also seen code patterns in which knowledge of the expected ZK node
version can alleviate the need for a sync() -- assuming you have an expected
ZK node version to check against.
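
Something like the following is what I mean by the version-check alternative
(illustrative only; it assumes the caller tracked an expected version from an
earlier write):

{code:java}
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Returns the data only if the node is at least as new as we expect; a null
// return means the read was stale and the caller should retry (e.g. after a
// sync(), or once a watch fires).
static byte[] readIfCurrent(ZooKeeper zk, String path, int expectedVersion) throws Exception {
  Stat stat = new Stat();
  byte[] data = zk.getData(path, false, stat);
  return stat.getVersion() >= expectedVersion ? data : null;
}
{code}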

~ David

On Thu, Apr 19, 2018 at 5:28 PM Chris Hostetter 
wrote:

>
> IIUC, the reason you don't see any calls to sync() is because Solr's use
> of ZK is mostly based on Watchers? ... so we have callback functions to be
> notified anytime something (like leaders, overseer, cluster state,
> etc...) changes and those callbacks update local copies of that state,
> which other (less ZK savvy) code can read on demand ... there shouldn't
> be much "polling" of ZK data in Solr that would require syncs?
>
> : Date: Thu, 19 Apr 2018 21:16:41 +
> : From: David Smiley 
> : Reply-To: dev@lucene.apache.org
> : To: "dev@lucene.apache.org" 
> : Subject: FYI Zookeeper.sync()
> :
> : As I was contemplating how some weird behavior I see while working on
> : SolrCloud could happen, I started to question my basic assumptions.  One
> : assumption I held is that SolrZkClient.getData (which calls
> : ZooKeeper.getData) would always return the most up to date information
> : from the ZK cluster.  The docs for ZooKeeper.getData say nothing on this
> : matter.  If it did work this way, I imagine it would need to talk to at
> : least a majority of the ZK ensemble. Now that I read the ZK
> : Programmer's guide RE consistency guarantees --
> :
> http://zookeeper.apache.org/doc/r3.4.11/zookeeperProgrammers.html#ch_zkGuarantees
> : I
> : see reads may read stale data and that I can use ZooKeeper.sync() first
> to
> : get the latest.  Okay wow.  Interestingly we don't call this anywhere in
> : our codebase.  With this newfound realization, I think at least one place
> : pertaining to TimeRoutedAliases really wants the most up to date
> : information, so I'll need to add a call to sync.  Of course sync() should
> : be added deliberately not haphazardly; I'm sure it has overhead.
> : --
> : Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> : LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> : http://www.solrenterprisesearchserver.com
> :
>
> -Hoss
> http://www.lucidworks.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-9304) -Dsolr.ssl.checkPeerName=false ignored on master

2018-04-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444834#comment-16444834
 ] 

Hoss Man commented on SOLR-9304:


patch updated to:
* include commented out {{SOLR_SSL_CHECK_PEER_NAME}} in {{solr.in.sh}} and 
{{solr.in.cmd}}
* update both those files as well as {{enabling-ssl.adoc}} to be consistent in 
their list of settings and comments about those settings

> -Dsolr.ssl.checkPeerName=false ignored on master
> 
>
> Key: SOLR-9304
> URL: https://issues.apache.org/jira/browse/SOLR-9304
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-9304-uses-deprecated.patch, SOLR-9304.patch, 
> SOLR-9304.patch, SOLR-9304.patch, SOLR-9304.patch, SOLR-9304.patch, 
> SOLR-9304.patch, SOLR-9304.patch
>
>
> {{-Dsolr.ssl.checkPeerName=false}} is completely ignored on master...
> {noformat}
> hossman@tray:~/lucene/dev/solr [master] $ find -name \*.java | xargs grep 
> checkPeerName
> ./solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java:  
> public static final String SYS_PROP_CHECK_PEER_NAME = 
> "solr.ssl.checkPeerName";
> hossman@tray:~/lucene/dev/solr [master] $ find -name \*.java | xargs grep 
> SYS_PROP_CHECK_PEER_NAME
> ./test-framework/src/java/org/apache/solr/util/SSLTestConfig.java:  
> boolean sslCheckPeerName = 
> toBooleanDefaultIfNull(toBooleanObject(System.getProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME)),
>  true);
> ./solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java:  
> public static final String SYS_PROP_CHECK_PEER_NAME = 
> "solr.ssl.checkPeerName";
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9304) -Dsolr.ssl.checkPeerName=false ignored on master

2018-04-19 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9304:
---
Attachment: SOLR-9304.patch

> -Dsolr.ssl.checkPeerName=false ignored on master
> 
>
> Key: SOLR-9304
> URL: https://issues.apache.org/jira/browse/SOLR-9304
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-9304-uses-deprecated.patch, SOLR-9304.patch, 
> SOLR-9304.patch, SOLR-9304.patch, SOLR-9304.patch, SOLR-9304.patch, 
> SOLR-9304.patch, SOLR-9304.patch
>
>
> {{-Dsolr.ssl.checkPeerName=false}} is completely ignored on master...
> {noformat}
> hossman@tray:~/lucene/dev/solr [master] $ find -name \*.java | xargs grep 
> checkPeerName
> ./solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java:  
> public static final String SYS_PROP_CHECK_PEER_NAME = 
> "solr.ssl.checkPeerName";
> hossman@tray:~/lucene/dev/solr [master] $ find -name \*.java | xargs grep 
> SYS_PROP_CHECK_PEER_NAME
> ./test-framework/src/java/org/apache/solr/util/SSLTestConfig.java:  
> boolean sslCheckPeerName = 
> toBooleanDefaultIfNull(toBooleanObject(System.getProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME)),
>  true);
> ./solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java:  
> public static final String SYS_PROP_CHECK_PEER_NAME = 
> "solr.ssl.checkPeerName";
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: FYI Zookeeper.sync()

2018-04-19 Thread Chris Hostetter

IIUC, the reason you don't see any calls to sync() is because Solr's use 
of ZK is mostly based on Watchers? ... so we have callback functions to be 
notified anytime something (like leaders, overseer, cluster state, 
etc...) changes and those callbacks update local copies of that state, 
which other (less ZK savvy) code can read on demand ... there shouldn't 
be much "polling" of ZK data in Solr that would require syncs?
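
In rough sketch form, the watch-and-cache pattern looks like this (class and
field names here are illustrative, not the actual Solr code):

{code:java}
import java.util.concurrent.atomic.AtomicReference;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

class CachedZkNode implements Watcher {
  private final ZooKeeper zk;
  private final String path;
  private final AtomicReference<byte[]> cached = new AtomicReference<>();

  CachedZkNode(ZooKeeper zk, String path) throws Exception {
    this.zk = zk;
    this.path = path;
    refresh();
  }

  private void refresh() throws Exception {
    // re-registers this watcher on every read
    cached.set(zk.getData(path, this, new Stat()));
  }

  @Override
  public void process(WatchedEvent event) {
    try {
      refresh(); // the callback updates the local copy
    } catch (Exception e) {
      // omitted: reconnect/retry handling
    }
  }

  byte[] get() { return cached.get(); } // readers see the cached state
}
{code}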

: Date: Thu, 19 Apr 2018 21:16:41 +
: From: David Smiley 
: Reply-To: dev@lucene.apache.org
: To: "dev@lucene.apache.org" 
: Subject: FYI Zookeeper.sync()
: 
: As I was contemplating how some weird behavior I see while working on
: SolrCloud could happen, I started to question my basic assumptions.  One
: assumption I held is that SolrZkClient.getData (which calls
: ZooKeeper.getData) would always return the most up to date information
: from the ZK cluster.  The docs for ZooKeeper.getData say nothing on this
: matter.  If it did work this way, I imagine it would need to talk to at
: least a majority of the ZK ensemble. Now that I read the ZK
: Programmer's guide RE consistency guarantees --
: 
http://zookeeper.apache.org/doc/r3.4.11/zookeeperProgrammers.html#ch_zkGuarantees
: I
: see reads may read stale data and that I can use ZooKeeper.sync() first to
: get the latest.  Okay wow.  Interestingly we don't call this anywhere in
: our codebase.  With this newfound realization, I think at least one place
: pertaining to TimeRoutedAliases really wants the most up to date
: information, so I'll need to add a call to sync.  Of course sync() should
: be added deliberately not haphazardly; I'm sure it has overhead.
: -- 
: Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
: LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
: http://www.solrenterprisesearchserver.com
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12045) Move Analytics Component from contrib to core

2018-04-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-12045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444829#comment-16444829
 ] 

Jan Høydahl commented on SOLR-12045:


The solution to class loader problems is not to move everything into a big fat 
core, but to solve the class loader issues :)

> Move Analytics Component from contrib to core
> -
>
> Key: SOLR-12045
> URL: https://issues.apache.org/jira/browse/SOLR-12045
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Priority: Major
> Fix For: master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Analytics Component currently lives in contrib. Since it includes no 
> external dependencies, there is no harm in moving it into core solr.
> The analytics component would be included as a default search component and 
> the analytics handler (currently only used for analytics shard requests, 
> might be transitioned to handle user requests in the future) would be 
> included as an implicit handler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-04-19 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444820#comment-16444820
 ] 

Gus Heck commented on SOLR-7896:


{quote}Authenticating the admin UI while leaving the API unprotected is only an 
illusion of security. Everything the admin UI does can be done directly, using 
the API.
{quote}
[~elyograg] We are on the same page, and if you took anything I said to be 
recommending such a configuration, then my prose was unclear :).

What I do advocate is that the HTML pages (except maybe a special login page?) 
be similarly protected, not because they require protection for security 
reasons, but because pages that don't work properly without login can only 
confuse the user if rendered. We should only show the user pages that can 
provide full functionality.

A login/landing page is much friendlier than the standard browser basic auth 
pop-up, so I'd say there's some value in that too, and it would potentially 
allow for a consistent experience across any auth mechanism that didn't 
fundamentally require a redirect to an external auth provider's login.

I do think it would be good to have Solr password protected by default, with a 
command-line switch to start it in legacy "open" mode if the server has not 
previously been protected by authentication. The "please set a password" dance 
on first startup would also be user friendly, and this should set the password 
for both the UI files and the API. If Solr has been configured to run its auth 
against Kerberos, LDAP, SiteMinder, a database, etc., the config for that 
should specify whether Solr has write access to that backend and skip the 
set-password dance if access is read-only.
{quote}By the time Solr starts, all interface binding is already done by the 
servlet container.
{quote}
As for things happening during startup of "the web container": that should be 
entirely under our control now, since we supply the Jetty container ourselves. 
Running as a war file in arbitrary containers is not supported anymore.

 

 

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: authentication, login, password
>
> Now that Solr supports Authentication plugins, the missing piece is allowing 
> access from the Admin UI when authentication is enabled. For this we need
>  * Some plumbing in Admin UI that allows the UI to detect 401 responses and 
> redirect to login page
>  * Possibility to have multiple login pages depending on auth method and 
> redirect to the correct one
>  * [AngularJS HTTP 
> interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to 
> add correct HTTP headers on all requests when user is logged in
> This issue should aim to implement some of the plumbing mentioned above, and 
> make it work with Basic Auth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



FYI Zookeeper.sync()

2018-04-19 Thread David Smiley
As I was contemplating how some weird behavior I see while working on
SolrCloud could happen, I started to question my basic assumptions.  One
assumption I held is that SolrZkClient.getData (which calls
ZooKeeper.getData) would always return the most up to date information from
the ZK cluster.  The docs for ZooKeeper.getData say nothing on this matter.
If it did work this way, I imagine it would need to talk to at least a
majority of the ZK ensemble. Now that I read the ZK
Programmer's guide RE consistency guarantees --
http://zookeeper.apache.org/doc/r3.4.11/zookeeperProgrammers.html#ch_zkGuarantees
I
see reads may read stale data and that I can use ZooKeeper.sync() first to
get the latest.  Okay wow.  Interestingly we don't call this anywhere in
our codebase.  With this newfound realization, I think at least one place
pertaining to TimeRoutedAliases really wants the most up to date
information, so I'll need to add a call to sync.  Of course sync() should
be added deliberately not haphazardly; I'm sure it has overhead.
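
Concretely, the call pattern would be roughly this (a sketch against a raw
ZooKeeper handle, not Solr code; sync() is asynchronous, hence the latch):

{code:java}
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Ask the server we're connected to to catch up with the leader, then read.
static byte[] syncThenRead(ZooKeeper zk, String path) throws Exception {
  CountDownLatch latch = new CountDownLatch(1);
  zk.sync(path, (rc, p, ctx) -> latch.countDown(), null);
  latch.await(); // block until the follower has caught up
  return zk.getData(path, false, new Stat());
}
{code}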
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Updated] (SOLR-12247) NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor to fire on first run!

2018-04-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12247:
--
Component/s: Tests
 AutoScaling

> NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor 
> to fire on first run!
> ---
>
> Key: SOLR-12247
> URL: https://issues.apache.org/jira/browse/SOLR-12247
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Priority: Major
>
> 100% reproducing seed from 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/203/]:
> {noformat}
> Checking out Revision 1b5690203de6d529f1eda671f84d710abd561bea 
> (refs/remotes/origin/branch_7x)
> [...]
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=NodeAddedTriggerTest -Dtests.method=testRestoreState 
> -Dtests.seed=B9D447011147FCB6 -Dtests.multiplier=2 -Dtests.locale=fr-BE 
> -Dtests.timezone=MIT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[smoker][junit4] FAILURE 3.38s J2 | 
> NodeAddedTriggerTest.testRestoreState <<<
>[smoker][junit4]> Throwable #1: java.lang.AssertionError: Did not 
> expect the processor to fire on first run! event={
>[smoker][junit4]>   
> "id":"16bf1f58bda2d8Ta3xzeiz95jejbcrchofogpdj2",
>[smoker][junit4]>   "source":"node_added_trigger",
>[smoker][junit4]>   "eventTime":6402590841348824,
>[smoker][junit4]>   "eventType":"NODEADDED",
>[smoker][junit4]>   "properties":{
>[smoker][junit4]> "eventTimes":[6402590841348824],
>[smoker][junit4]> "nodeNames":["127.0.0.1:40637_solr"]}}
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B9D447011147FCB6:777AE392E97E84A0]:0)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
>[smoker][junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[smoker][junit4]   2> NOTE: test params are: 
> codec=Asserting(Lucene70), sim=RandomSimilarity(queryNorm=true): {}, 
> locale=fr-BE, timezone=MIT
>[smoker][junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle 
> Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=70702960,total=428867584
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12247) NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor to fire on first run!

2018-04-19 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-12247:
-

 Summary: NodeAddedTriggerTest.testRestoreState() failure: Did not 
expect the processor to fire on first run!
 Key: SOLR-12247
 URL: https://issues.apache.org/jira/browse/SOLR-12247
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe


100% reproducing seed from 
[https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/203/]:

{noformat}
Checking out Revision 1b5690203de6d529f1eda671f84d710abd561bea 
(refs/remotes/origin/branch_7x)
[...]
   [smoker][junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=NodeAddedTriggerTest -Dtests.method=testRestoreState 
-Dtests.seed=B9D447011147FCB6 -Dtests.multiplier=2 -Dtests.locale=fr-BE 
-Dtests.timezone=MIT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [smoker][junit4] FAILURE 3.38s J2 | 
NodeAddedTriggerTest.testRestoreState <<<
   [smoker][junit4]> Throwable #1: java.lang.AssertionError: Did not 
expect the processor to fire on first run! event={
   [smoker][junit4]>   "id":"16bf1f58bda2d8Ta3xzeiz95jejbcrchofogpdj2",
   [smoker][junit4]>   "source":"node_added_trigger",
   [smoker][junit4]>   "eventTime":6402590841348824,
   [smoker][junit4]>   "eventType":"NODEADDED",
   [smoker][junit4]>   "properties":{
   [smoker][junit4]> "eventTimes":[6402590841348824],
   [smoker][junit4]> "nodeNames":["127.0.0.1:40637_solr"]}}
   [smoker][junit4]>at 
__randomizedtesting.SeedInfo.seed([B9D447011147FCB6:777AE392E97E84A0]:0)
   [smoker][junit4]>at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
   [smoker][junit4]>at 
org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
   [smoker][junit4]>at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
   [smoker][junit4]>at java.lang.Thread.run(Thread.java:748)
[...]
   [smoker][junit4]   2> NOTE: test params are: codec=Asserting(Lucene70), 
sim=RandomSimilarity(queryNorm=true): {}, locale=fr-BE, timezone=MIT
   [smoker][junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle 
Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=70702960,total=428867584
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1009 - Still Failing

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1009/

No tests ran.

Build Log:
[...truncated 24176 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2198 links (1754 relative) to 3020 anchors in 244 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 

[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444733#comment-16444733
 ] 

ASF subversion and git services commented on SOLR-12028:


Commit 3d21fda4ce1c899f31b8f00e200eb1ac0d23d17b in lucene-solr's branch 
refs/heads/branch_7x from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3d21fda ]

SOLR-12028: BadApple and AwaitsFix annotations usage

(cherry picked from commit 5ef43e9)


> BadApple and AwaitsFix annotations usage
> 
>
> Key: SOLR-12028
> URL: https://issues.apache.org/jira/browse/SOLR-12028
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, 
> SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch
>
>
> There's a long discussion of this topic at SOLR-12016. Here's a summary:
> - BadApple annotations are used for tests that intermittently fail, say < 30% 
> of the time. Tests that fail more often should be moved to AwaitsFix. This is, 
> of course, a judgement call
> - AwaitsFix annotations are used for tests that, for some reason, the problem 
> can't be fixed immediately. Likely reasons are third-party dependencies, 
> extreme difficulty tracking down, dependency on another JIRA etc.
> Jenkins jobs will typically run with BadApple disabled to cut down on noise. 
> Periodically Jenkins jobs will be run with BadApples enabled so BadApple 
> tests won't be lost and reports can be generated. Tests that run with 
> BadApples disabled that fail require _immediate_ attention.
> The default for developers is that BadApple is enabled.
> If you are working on one of these tests and cannot get the test to fail 
> locally, it is perfectly acceptable to comment the annotation out. You should 
> let the dev list know that this is deliberate.
> This JIRA is a placeholder for BadApple tests to point to between the times 
> they're identified as BadApple and they're either fixed or changed to 
> AwaitsFix or assigned their own JIRA.
> I've assigned this to myself to track so I don't lose track of it. No one 
> person will fix all of these issues, this will be an ongoing technical debt 
> cleanup effort.
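
As an illustration of how the two annotations are attached in the Lucene test
framework (the test names and the second bugUrl below are placeholders):

{code:java}
import org.apache.lucene.util.LuceneTestCase;

public class SomeFlakyTest extends LuceneTestCase {

  // Intermittent failure (< ~30% of the time): annotate and link the tracking issue.
  @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12028")
  public void testOccasionallyFlaky() throws Exception {
    // ... test body ...
  }

  // Known-broken until the linked issue is fixed; skipped by default.
  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-XXXXX")
  public void testKnownBroken() throws Exception {
    // ... test body ...
  }
}
{code}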



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444712#comment-16444712
 ] 

ASF subversion and git services commented on SOLR-12028:


Commit 5ef43e900f8abeeb56cb9bba8ca1d050ec956f21 in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5ef43e9 ]

SOLR-12028: BadApple and AwaitsFix annotations usage


> BadApple and AwaitsFix annotations usage
> 
>
> Key: SOLR-12028
> URL: https://issues.apache.org/jira/browse/SOLR-12028
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, 
> SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch
>
>
> There's a long discussion of this topic at SOLR-12016. Here's a summary:
> - BadApple annotations are used for tests that intermittently fail, say < 30% 
> of the time. Tests that fail more often should be moved to AwaitsFix. This is, 
> of course, a judgement call
> - AwaitsFix annotations are used for tests that, for some reason, the problem 
> can't be fixed immediately. Likely reasons are third-party dependencies, 
> extreme difficulty tracking down, dependency on another JIRA etc.
> Jenkins jobs will typically run with BadApple disabled to cut down on noise. 
> Periodically Jenkins jobs will be run with BadApples enabled so BadApple 
> tests won't be lost and reports can be generated. Tests that run with 
> BadApples disabled that fail require _immediate_ attention.
> The default for developers is that BadApple is enabled.
> If you are working on one of these tests and cannot get the test to fail 
> locally, it is perfectly acceptable to comment the annotation out. You should 
> let the dev list know that this is deliberate.
> This JIRA is a placeholder for BadApple tests to point to between the times 
> they're identified as BadApple and they're either fixed or changed to 
> AwaitsFix or assigned their own JIRA.
> I've assigned this to myself to track so I don't lose track of it. No one 
> person will fix all of these issues, this will be an ongoing technical debt 
> cleanup effort.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 573 - Still Unstable

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/573/

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
waitFor not elapsed but produced an event

Stack Trace:
java.lang.AssertionError: waitFor not elapsed but produced an event
at 
__randomizedtesting.SeedInfo.seed([253917080D88AFBF:46F2218A9447DC92]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
events: [CapturedEvent{timestamp=24415023608107108, stage=STARTED, 
actionName='null', event={   "id":"56bd53430985d8T2vpfm8neyf9cwl8lnmwvj8qwe",   

[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-04-19 Thread Elizabeth Haubert (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444682#comment-16444682
 ] 

Elizabeth Haubert commented on SOLR-12243:
--

I think the problem is EdismaxQParser, around line 1393.  super.getFieldQuery 
returns a SpanNearQuery, which is not an instance of BooleanQuery, PhraseQuery, 
or MultiPhraseQuery.  So the query parser decides it didn't get a legitimate 
phrase query back, and throws it out.

What is the history around which query types are allowed as a phrase query and 
which aren't?

{code:java}
private Query getQuery() {
  try {
    switch (type) {
      case FIELD:  // fallthrough
      case PHRASE:
        Query query;
        if (val == null) {
          query = super.getFieldQuery(field, vals, false);
        } else {
          query = super.getFieldQuery(field, val, type == QType.PHRASE, false);
        }
        // Boolean query on a whitespace-separated string
        // If these were synonyms we would have a SynonymQuery
        if (query instanceof BooleanQuery) {
          BooleanQuery bq = (BooleanQuery) query;
          query = SolrPluginUtils.setMinShouldMatch(bq, minShouldMatch, false);
        }
        if (query instanceof PhraseQuery) {
          PhraseQuery pq = (PhraseQuery) query;
          if (minClauseSize > 1 && pq.getTerms().length < minClauseSize) return null;
          PhraseQuery.Builder builder = new PhraseQuery.Builder();
          Term[] terms = pq.getTerms();
          int[] positions = pq.getPositions();
          for (int i = 0; i < terms.length; ++i) {
            builder.add(terms[i], positions[i]);
          }
          builder.setSlop(slop);
          query = builder.build();
        } else if (query instanceof MultiPhraseQuery) {
          MultiPhraseQuery mpq = (MultiPhraseQuery) query;
          if (minClauseSize > 1 && mpq.getTermArrays().length < minClauseSize) return null;
          if (slop != mpq.getSlop()) {
            query = new MultiPhraseQuery.Builder(mpq).setSlop(slop).build();
          }
        } else if (minClauseSize > 1) {  // <-- the highlighted branch: a SpanNearQuery lands here
          // if it's not a type of phrase query, it doesn't meet the
          // minClauseSize requirements
          return null;
        }
{code}

 

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Priority: Major
>
> synonyms.txt:
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> request handler:
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
>  
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6286) TestReplicationHandler.doTestReplicateAfterCoreReload reliably reproducing seed failures comparing master commits before/after reload

2018-04-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-6286.
--
   Resolution: Fixed
 Assignee: Steve Rowe  (was: Shalin Shekhar Mangar)
Fix Version/s: (was: 6.0)
   (was: 4.10)
   master (8.0)
   7.4

> TestReplicationHandler.doTestReplicateAfterCoreReload reliably reproducing 
> seed failures comparing master commits before/after reload
> -
>
> Key: SOLR-6286
> URL: https://issues.apache.org/jira/browse/SOLR-6286
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Shalin Shekhar Mangar
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-6286.patch, SOLR-6286.patch
>
>
> There have been a few failures on jenkins.
> {code}
> 3 tests failed.
> REGRESSION:  
> org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload
> Error Message:
> expected:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, 
> _bta.fdx, _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}]> but 
> was:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}, 
> {indexVersion=1406477990053,generation=3,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _ypc.cfe, _ypc.cfs, _ypc.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_3]}]>
> Stack Trace:
> java.lang.AssertionError: 
> expected:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, 
> _bta.fdx, _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}]> but 
> was:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}, 
> {indexVersion=1406477990053,generation=3,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _ypc.cfe, _ypc.cfs, _ypc.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_3]}]>
> at 
> __randomizedtesting.SeedInfo.seed([E4FFCDCA8EC968BC:C128D6FAFE8166BF]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at 

[jira] [Commented] (SOLR-6286) TestReplicationHandler.doTestReplicateAfterCoreReload reliably reproducing seed failures comparing master commits before/after reload

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444612#comment-16444612
 ] 

ASF subversion and git services commented on SOLR-6286:
---

Commit 46037dc67494a746857048399c02a6cf6f7a07c1 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=46037dc ]

SOLR-6286: TestReplicationHandler.doTestReplicateAfterCoreReload(): stop 
checking for identical commits before/after master core reload; and make 
non-nightly mode test 10 docs instead of 0.


> TestReplicationHandler.doTestReplicateAfterCoreReload reliably reproducing 
> seed failures comparing master commits before/after reload
> -
>
> Key: SOLR-6286
> URL: https://issues.apache.org/jira/browse/SOLR-6286
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 4.10, 6.0
>
> Attachments: SOLR-6286.patch, SOLR-6286.patch
>
>
> There have been a few failures on jenkins.
> {code}
> 3 tests failed.
> REGRESSION:  
> org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload
> Error Message:
> expected:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, 
> _bta.fdx, _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}]> but 
> was:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}, 
> {indexVersion=1406477990053,generation=3,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _ypc.cfe, _ypc.cfs, _ypc.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_3]}]>
> Stack Trace:
> java.lang.AssertionError: 
> expected:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, 
> _bta.fdx, _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}]> but 
> was:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}, 
> {indexVersion=1406477990053,generation=3,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _ypc.cfe, _ypc.cfs, _ypc.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, 

[jira] [Commented] (SOLR-6286) TestReplicationHandler.doTestReplicateAfterCoreReload reliably reproducing seed failures comparing master commits before/after reload

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444611#comment-16444611
 ] 

ASF subversion and git services commented on SOLR-6286:
---

Commit 581983fd771b443d59696edb6eeef07e4a4442b5 in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=581983f ]

SOLR-6286: TestReplicationHandler.doTestReplicateAfterCoreReload(): stop 
checking for identical commits before/after master core reload; and make 
non-nightly mode test 10 docs instead of 0.


> TestReplicationHandler.doTestReplicateAfterCoreReload reliably reproducing 
> seed failures comparing master commits before/after reload
> -
>
> Key: SOLR-6286
> URL: https://issues.apache.org/jira/browse/SOLR-6286
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 4.10, 6.0
>
> Attachments: SOLR-6286.patch, SOLR-6286.patch
>
>
> There have been a few failures on jenkins.
> {code}
> 3 tests failed.
> REGRESSION:  
> org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload
> Error Message:
> expected:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, 
> _bta.fdx, _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}]> but 
> was:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}, 
> {indexVersion=1406477990053,generation=3,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _ypc.cfe, _ypc.cfs, _ypc.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_3]}]>
> Stack Trace:
> java.lang.AssertionError: 
> expected:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, 
> _bta.fdx, _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}]> but 
> was:<[{indexVersion=1406477990053,generation=2,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _yok.cfe, _yok.cfs, _yok.si, _yp3.cfe, _yp3.cfs, _yp3.si, _yp4.cfe, _yp4.cfs, 
> _yp4.si, _yp5.cfe, _yp5.cfs, _yp5.si, _yp6.cfe, _yp6.cfs, _yp6.si, _yp7.cfe, 
> _yp7.cfs, _yp7.si, _yp8.cfe, _yp8.cfs, _yp8.si, _yp9.cfe, _yp9.cfs, _yp9.si, 
> _ypa.cfe, _ypa.cfs, _ypa.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, _ypx.cfs, _ypx.si, _ypy.cfe, 
> _ypy.cfs, _ypy.si, segments_2]}, 
> {indexVersion=1406477990053,generation=3,filelist=[_bta.fdt, _bta.fdx, 
> _bta.fnm, _bta.si, _bta_Lucene41_0.doc, _bta_Lucene41_0.tim, 
> _bta_Lucene41_0.tip, _bta_nrm.cfe, _bta_nrm.cfs, _nik.cfe, _nik.cfs, _nik.si, 
> _ypc.cfe, _ypc.cfs, _ypc.si, _ypu.cfe, _ypu.cfs, _ypu.si, _ypv.cfe, _ypv.cfs, 
> _ypv.si, _ypw.cfe, _ypw.cfs, _ypw.si, _ypx.cfe, 

[jira] [Created] (SOLR-12246) Any full recovery complains about checksum mismatch for a .liv file

2018-04-19 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12246:


 Summary: Any full recovery complains about checksum mismatch for a 
.liv file
 Key: SOLR-12246
 URL: https://issues.apache.org/jira/browse/SOLR-12246
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


Any time a full recovery happens, I get a checksum-mismatch warning on the 
".liv" file.
{code:java}
date time WARN  
[recoveryExecutor-3-thread-5-processing-x:collection_shard_replica 
https:host:port//solr//collection_shard_replica r:core_node69 
n:host:port_solr c:collection s:shard] ? (:) - File _2yzfn_7pc.liv did not 
match. expected checksum is 4263266717 and actual is checksum 1689291857. 
expected length is 936757 and actual length is 936757{code}
Today we download the file anyway because of this check in IndexFetcher:
{code:java}
static boolean filesToAlwaysDownloadIfNoChecksums(String filename,
    long size, CompareResult compareResult) {
  // without checksums to compare, we always download .si, .liv, segments_N,
  // and any very small files
  return !compareResult.checkSummed
      && (filename.endsWith(".si") || filename.endsWith(".liv")
          || filename.startsWith("segments_") || size < _100K);
}{code}
So I think a WARN here is very confusing to a user who doesn't understand the 
internals of a full recovery.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12245) DistributedUpdateProcessor doesn't set MDC in some errors

2018-04-19 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444553#comment-16444553
 ] 

Varun Thacker commented on SOLR-12245:
--

Here's another stack trace. This one mentions the replica name, so we know 
which collection the problem is for:
{code:java}
date time ERROR [qtp1131184204-253907] ? (:) - Setting up to try to start 
recovery on replica https://host:port/solr/collection_shardN_replicaY/
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[]
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) ~[]
...
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
 ~[httpclient-4.4.1.jar:4.4.1]
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:311)
 ~[solr-solrj-solr-version.jar:solr-version 
593a03a2847d2e1f312bef99ce26601e982b9377 - RM - time]
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:184)
 ~[solr-solrj-solr-version.jar:solr-version 
593a03a2847d2e1f312bef99ce26601e982b9377 - RM - time]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.2.jar:3.2.2]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:237)
 ~[solr-solrj-solr-version.jar:solr-version 
593a03a2847d2e1f312bef99ce26601e982b9377 - RM - time]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[]
at java.lang.Thread.run(Thread.java:748) []{code}
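
A rough sketch of the kind of MDC plumbing this calls for (the key names below
are illustrative, not necessarily Solr's actual MDC keys):

{code:java}
import org.slf4j.MDC;

// Wrap the distributed-update work so any error logged inside it carries
// collection/shard/replica context in the MDC.
static void runWithMdc(String collection, String shard, String replica, Runnable work) {
  MDC.put("collection", collection);
  MDC.put("shard", shard);
  MDC.put("replica", replica);
  try {
    work.run();
  } finally {
    MDC.remove("collection");
    MDC.remove("shard");
    MDC.remove("replica");
  }
}
{code}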

> DistributedUpdateProcessor doesn't set MDC in some errors
> -
>
> Key: SOLR-12245
> URL: https://issues.apache.org/jira/browse/SOLR-12245
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.2
>Reporter: Varun Thacker
>Priority: Major
>
> I'm getting this error in the Solr logs, but it's not possible to tell which 
> shard and collection the request is for
>  
> {code:java}
> date time ERROR [qtp1232773650-563731] ? (:) - 
> null:org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
>  Async exception during distributed update: Read timed out
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:972)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1911)
> at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:78)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305){code}
> We should mention the shard / replica and collection name that is distributing 
> the update






[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle

2018-04-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444514#comment-16444514
 ] 

David Smiley commented on SOLR-11200:
-

It would be nice if somehow Solr could be smart enough to know when this 
setting is appropriate vs not.  Otherwise we have yet another magic setting 
that expert users may or may not eventually find.  For example, if there are no 
searches going on then don't throttle.  Perhaps Solr could wrap the merge 
scheduler so that when a merge is about to happen it looks at the 
SolrIndexSearcher to get some stats.  Just a straw-man; I dunno.  Perhaps 
another similar direction is to enhance SolrIndexSearcher to close lazily if 
it's not actually getting used (I've heard of this strategy used in a forked 
Solr to reduce memory for a massive # of cores).  And then what we detect in this 
merge scheduler is quite simply whether there is an active SolrIndexSearcher or not. 
 Today we always have one.  Anyway, something to consider.
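As a straw-man sketch of that idea (the "searches active" check is a stand-in for whatever stats SolrIndexSearcher could actually expose):
{code:java}
import java.io.IOException;
import java.util.function.BooleanSupplier;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.MergeTrigger;

// Sketch: toggle merge IO auto-throttling based on whether searches are
// actually running when a merge is about to be scheduled.
class SearchAwareMergeScheduler extends ConcurrentMergeScheduler {
  private final BooleanSupplier searchesActive; // stand-in for real searcher stats

  SearchAwareMergeScheduler(BooleanSupplier searchesActive) {
    this.searchesActive = searchesActive;
  }

  @Override
  public synchronized void merge(IndexWriter writer, MergeTrigger trigger,
                                 boolean newMergesFound) throws IOException {
    if (searchesActive.getAsBoolean()) {
      enableAutoIOThrottle();   // protect query latency
    } else {
      disableAutoIOThrottle();  // bulk indexing: let merges run at full speed
    }
    super.merge(writer, trigger, newMergesFound);
  }
}
{code}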

> provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
> ---
>
> Key: SOLR-11200
> URL: https://issues.apache.org/jira/browse/SOLR-11200
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Nawab Zada Asad iqbal
>Priority: Minor
> Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch
>
>
> This config can be useful while bulk indexing. Lucene introduced it 
> https://issues.apache.org/jira/browse/LUCENE-6119 . 






[jira] [Created] (SOLR-12245) DistributedUpdateProcessor doesn't set MDC in some errors

2018-04-19 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12245:


 Summary: DistributedUpdateProcessor doesn't set MDC in some errors
 Key: SOLR-12245
 URL: https://issues.apache.org/jira/browse/SOLR-12245
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.2
Reporter: Varun Thacker


I'm getting this error in the Solr logs, but it's not possible to tell which 
shard and collection the request is for.

 
{code:java}
date time ERROR [qtp1232773650-563731] ? (:) - 
null:org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
 Async exception during distributed update: Read timed out
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:972)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1911)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:78)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305){code}
We should mention the shard / replica and collection name that is distributing 
the update.






[jira] [Commented] (SOLR-12203) Error in response for field containing date. Unexpected state.

2018-04-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1688#comment-1688
 ] 

David Smiley commented on SOLR-12203:
-

I could be wrong, but I bet the bug is related to a default Object.toString() 
getting called somewhere.  That theory is not necessarily helpful in finding it, 
as such a call happens in a lot of places, but in particular I've seen it in 
serialization to/from the client, e.g. JavaBin.
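To make that theory concrete, a generic sketch of the suspected failure mode (not Solr's actual serialization code): a Date that should travel in ISO-8601 form instead leaks its default toString() representation.
{code:java}
import java.util.Date;

// Generic sketch: the default Date.toString() form is locale/zone dependent
// and is not what a reader expecting ISO-8601 can parse back.
class ToStringFallbackSketch {
  public static void main(String[] args) {
    Date d = new Date(0L);
    System.out.println(d.toString());   // e.g. "Thu Jan 01 00:00:00 UTC 1970"
    System.out.println(d.toInstant());  // "1970-01-01T00:00:00Z", the expected form
  }
}
{code}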

> Error in response for field containing date. Unexpected state.
> --
>
> Key: SOLR-12203
> URL: https://issues.apache.org/jira/browse/SOLR-12203
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: 7.2.1, 7.3
>Reporter: Jeroen Steggink
>Priority: Minor
>
> I get the following error:
> {noformat}
> java.lang.AssertionError: Unexpected state. Field: 
> stored,indexed,tokenized,omitNorms,indexOptions=DOCSds_lastModified:2013-10-04T22:25:11Z
> at org.apache.solr.schema.DatePointField.toObject(DatePointField.java:154)
> at org.apache.solr.schema.PointField.write(PointField.java:198)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:141)
> at 
> org.apache.solr.response.JSONWriter.writeSolrDocument(JSONResponseWriter.java:374)
> at 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> at java.lang.Thread.run(Thread.java:748){noformat}
> I can't find out why this occurs. The weird thing is, I can't seem to find 

[jira] [Updated] (SOLR-12244) Inconsistent method names

2018-04-19 Thread KuiLIU (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KuiLIU updated SOLR-12244:
--
Summary: Inconsistent method names  (was: Inconsistent method name)

> Inconsistent method names
> -
>
> Key: SOLR-12244
> URL: https://issues.apache.org/jira/browse/SOLR-12244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: KuiLIU
>Priority: Major
>
> The following method is named "getShardNames".
> The method actually adds "sliceName" values to "shardNames", so the name
> "addShardNames" would be clearer than "getShardNames", since "get" implies
> retrieving something.
> {code:java}
> public static void getShardNames(Integer numShards, List<String> shardNames) {
>   if (numShards == null)
>     throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
>         "numShards" + " is a required param");
>   for (int i = 0; i < numShards; i++) {
>     final String sliceName = "shard" + (i + 1);
>     shardNames.add(sliceName);
>   }
> }
> {code}






[jira] [Created] (SOLR-12244) Inconsistent method name

2018-04-19 Thread KuiLIU (JIRA)
KuiLIU created SOLR-12244:
-

 Summary: Inconsistent method name
 Key: SOLR-12244
 URL: https://issues.apache.org/jira/browse/SOLR-12244
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: KuiLIU


The following method is named "getShardNames".
The method actually adds "sliceName" values to "shardNames", so the name 
"addShardNames" would be clearer than "getShardNames", since "get" implies 
retrieving something.
{code:java}
public static void getShardNames(Integer numShards, List<String> shardNames) {
  if (numShards == null)
    throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
        "numShards" + " is a required param");
  for (int i = 0; i < numShards; i++) {
    final String sliceName = "shard" + (i + 1);
    shardNames.add(sliceName);
  }
}
{code}







[jira] [Commented] (SOLR-11933) DIH gui shouldn't have "clean" be checked by default

2018-04-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1678#comment-1678
 ] 

David Smiley commented on SOLR-11933:
-

+1 to Shawn's point

> DIH gui shouldn't have "clean" be checked by default
> 
>
> Key: SOLR-11933
> URL: https://issues.apache.org/jira/browse/SOLR-11933
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 7.2
>Reporter: Eric Pugh
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 7.3, master (8.0)
>
> Attachments: fef1d06a2eb15a0fd36eb91124af413a19d95528.diff
>
>
> The DIH webapp by default has the "clean" checkbox enabled.   Clean is very 
> dangerous because you delete all the data first, and then load the data.   
> Making this the default choice is bad UX.  






[jira] [Created] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-04-19 Thread Elizabeth Haubert (JIRA)
Elizabeth Haubert created SOLR-12243:


 Summary: Edismax missing phrase queries when phrases contain 
multiterm synonyms
 Key: SOLR-12243
 URL: https://issues.apache.org/jira/browse/SOLR-12243
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Affects Versions: 7.1
 Environment: RHEL, MacOS X

Do not believe this is environment-specific.
Reporter: Elizabeth Haubert


synonyms.txt:
{noformat}
allergic, hypersensitive
aspirin, acetylsalicylic acid
dog, canine, canis familiris, k 9
rat, rattus
{noformat}

request handler:
{noformat}
edismax
0.4
title^100
title~20^5000
title~11
title~22^1000
text
3<-1 6<-3 9<30%
*:*
25
{noformat}

 

Phrase queries (pf, pf2, pf3) will not be generated for phrases containing "dog" 
or "aspirin" from the above list.

"allergic reaction dog" will generate pf2: "allergic reaction", but not 
pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction dog"

"aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
dose" or pf3:"aspirin dose ?"

 






[jira] [Commented] (SOLR-12242) UnifiedHighlighter does not work with Surround query parser (SurroundQParser)

2018-04-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1602#comment-1602
 ] 

David Smiley commented on SOLR-12242:
-

Thanks for raising this issue.  It's similar to LUCENE-7757, which is support 
for the ComplexPhraseQParserPlugin; check that issue out.  It's tempting to pass 
the live IndexSearcher to extractTerms but that's potentially dangerous if the 
query contains wildcards – we *don't* want to extract all terms from a wildcard 
query on the index reader.
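To make the wildcard concern concrete, here is roughly what term extraction looks like against a searcher (a sketch in the Lucene 7.x-style API, illustrative only): rewriting against a *live* reader can expand a wildcard into every matching term in the index before extractTerms() runs.
{code:java}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Weight;

// Sketch: extract the terms a query matches, via rewrite + Weight.
class ExtractTermsSketch {
  static Set<Term> extract(IndexSearcher searcher, Query query) throws IOException {
    // Against a live searcher, rewrite() expands multi-term queries
    // (wildcards, prefixes, ...) into concrete terms from the index.
    Query rewritten = searcher.rewrite(query);
    Weight weight = searcher.createWeight(rewritten, false, 1f);
    Set<Term> terms = new HashSet<>();
    weight.extractTerms(terms);
    return terms;
  }
}
{code}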

> UnifiedHighlighter does not work with Surround query parser (SurroundQParser)
> -
>
> Key: SOLR-12242
> URL: https://issues.apache.org/jira/browse/SOLR-12242
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Affects Versions: 7.2.1
>Reporter: Andy Liu
>Priority: Major
> Attachments: TestUnifiedHighlighterSurround.java
>
>
> I'm attempting to use the UnifiedHighlighter in conjunction with queries 
> parsed by Solr's SurroundQParserPlugin. When doing so, the response yields 
> empty arrays for documents that should contain highlighted snippets.
> I've attached a test for UnifiedHighlighter that uses the surround's 
> QueryParser and preprocesses the query in a similar fashion as 
> SurroundQParser, which results in test failure.  When creating a SpanQuery 
> directly (rather than via surround's QueryParser), the test passes.
> The problem can be isolated to the code path initiated by 
> UnifiedHighlighter.extractTerms(), which uses EMPTY_INDEXSEARCHER to extract 
> terms from the query. After a series of method calls, we end up at 
> DistanceQuery.getSpanNearQuery(), where 
> {{((DistanceSubQuery)sqi.next()).addSpanQueries(sncf)}} fails silently and 
> doesn't add any span queries.  
> Another data point: If I hack UnifiedHighlighter and pass in a live 
> IndexSearcher to extractTerms(), highlighting works. 






[jira] [Updated] (SOLR-9304) -Dsolr.ssl.checkPeerName=false ignored on master

2018-04-19 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9304:
---
Attachment: SOLR-9304.patch

> -Dsolr.ssl.checkPeerName=false ignored on master
> 
>
> Key: SOLR-9304
> URL: https://issues.apache.org/jira/browse/SOLR-9304
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-9304-uses-deprecated.patch, SOLR-9304.patch, 
> SOLR-9304.patch, SOLR-9304.patch, SOLR-9304.patch, SOLR-9304.patch, 
> SOLR-9304.patch
>
>
> {{-Dsolr.ssl.checkPeerName=false}} is completely ignored on master...
> {noformat}
> hossman@tray:~/lucene/dev/solr [master] $ find -name \*.java | xargs grep 
> checkPeerName
> ./solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java:  
> public static final String SYS_PROP_CHECK_PEER_NAME = 
> "solr.ssl.checkPeerName";
> hossman@tray:~/lucene/dev/solr [master] $ find -name \*.java | xargs grep 
> SYS_PROP_CHECK_PEER_NAME
> ./test-framework/src/java/org/apache/solr/util/SSLTestConfig.java:  
> boolean sslCheckPeerName = 
> toBooleanDefaultIfNull(toBooleanObject(System.getProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME)),
>  true);
> ./solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java:  
> public static final String SYS_PROP_CHECK_PEER_NAME = 
> "solr.ssl.checkPeerName";
> {noformat}






[jira] [Commented] (SOLR-9304) -Dsolr.ssl.checkPeerName=false ignored on master

2018-04-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444395#comment-16444395
 ] 

Hoss Man commented on SOLR-9304:


I think the fix approach in this patch looks correct, although 2 related 
things bother me regarding the testing of this issue...

# the only tests added are reflection-based inspection of the final 
SchemaRegistry -- which not only means they'll be brittle if/when we upgrade 
commons-http, but it also means that we're not actually testing that 
{{checkPeerNames==false}} does what we say it does.  We assert that 
{{HttpClientUtil.getSchemaRegisteryProvider().getSchemaRegistry().lookup("https")}}
 is a {{ConnectionSocketFactory}} that uses {{NoopHostnameVerifier}}, but that 
doesn't prove that invalid hostnames will be ignored when that property is 
set.  (Somewhere down the road either the Solr code or commons-http could 
be refactored so that that code is irrelevant.)
# It makes no sense that {{SSLTestConfig}} is checking the value of 
{{System.getProperty(HttpClientUtil.SYS_PROP_CHECK_PEER_NAME)}} -- this 
completely predates this patch, and as far as I can tell is a blatant bug 
introduced by SOLR-4509 as part of that refactoring, but we should address it 
here.  The behavior of all our SSL testing should be deterministic regardless 
of what env/sys-props the user has set.

I'm about to attach an updated version of the patch with some improvements to 
address these concerns...

* minor refactoring to HttpClientUtilTest to reduce duplication
* re-add {{create-keystores.sh}}
** this is the script that creates the keystore our SSL testing uses, and it 
appears that I removed this in SOLR-10791
** it really should have been moved to {{solr/test-framework/src/resources/}} 
prior to that (when the original keystore location was copied/moved).
* improve {{create-keystores.sh}} so that it generates 2 different keystores:
** (the existing) keystore that uses "localhost" and the loopback IP
** another (new) keystore that uses a bogus hostname/IP combo that should fail 
peer name validation on any machine.
* Add an option to {{SSLTestConfig}} to make peer name validation configurable, 
and pick the keystore to use based on that choice.
** When SSLTestConfig's {{checkPeerName=true}}, the config will use the 
existing "localhost" keystore
** if it's {{checkPeerName=false}}, the (new) keystore containing the bogus 
hostname/IP combo will be used to ensure that all the SSL client code truly is 
ignoring the peer name in the cert.
* Change {{SSLTestConfig}} so that by default it does *NOT* do peer name 
validation
** this is technically a change in the default testing behavior, but in my 
opinion a minor one, since in the past it was only ever validating "localhost"
** if anything it now means fewer false negatives if someone has "localhost" 
configured improperly on their machine.
*** we could potentially randomize this as part of that {{@RandomizeSSL}} 
annotation -- I personally don't see a lot of value in doing that, but I'm open 
to it if other people feel strongly.
* Add 2 new tests to TestMiniSolrCloudClusterSSL:
** one that ensures an {{SSLTestConfig}} with {{checkPeerName=true}} is usable 
and clients can talk to the servers
** one that "tests the test" to ensure that if {{checkPeerName=false}} and the 
servers are using our "bogus hostname cert", a client who trusts that 
cert but has set {{HttpClientUtil.SYS_PROP_CHECK_PEER_NAME=true}} will get an 
{{SSLException}} if it tries to talk to those servers.

I'm still doing some manual testing, but feedback appreciated.  Please note 
that because of the new (binary) keystore files, the patch was generated with 
{{git diff --staged --binary}}.  You should be able to use {{git apply}} just 
fine, but other patch-based tools may not be happy with it.
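For context, the sysprop is meant to gate the hostname verifier when the SSL socket factory is built; a minimal sketch of that wiring with HttpClient 4.x (illustrative, not HttpClientUtil's actual code):
{code:java}
import javax.net.ssl.SSLContext;
import org.apache.http.conn.ssl.DefaultHostnameVerifier;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;

class PeerNameCheckSketch {
  static SSLConnectionSocketFactory socketFactory(SSLContext ctx) {
    // -Dsolr.ssl.checkPeerName=false should disable peer-name validation;
    // anything else (including unset) keeps the default strict verifier.
    boolean checkPeerName =
        Boolean.parseBoolean(System.getProperty("solr.ssl.checkPeerName", "true"));
    return new SSLConnectionSocketFactory(
        ctx,
        checkPeerName ? new DefaultHostnameVerifier() : NoopHostnameVerifier.INSTANCE);
  }
}
{code}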






> -Dsolr.ssl.checkPeerName=false ignored on master
> 
>
> Key: SOLR-9304
> URL: https://issues.apache.org/jira/browse/SOLR-9304
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-9304-uses-deprecated.patch, SOLR-9304.patch, 
> SOLR-9304.patch, SOLR-9304.patch, SOLR-9304.patch, SOLR-9304.patch, 
> SOLR-9304.patch
>
>
> {{-Dsolr.ssl.checkPeerName=false}} is completely ignored on master...
> {noformat}
> hossman@tray:~/lucene/dev/solr [master] $ find -name \*.java | xargs grep 
> checkPeerName
> ./solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java:  
> public static final String SYS_PROP_CHECK_PEER_NAME = 
> "solr.ssl.checkPeerName";
> hossman@tray:~/lucene/dev/solr [master] $ find -name \*.java | xargs grep 
> SYS_PROP_CHECK_PEER_NAME
> 

[GitHub] lucene-solr issue #354: Change the method identifier from "getShardNames" to...

2018-04-19 Thread tflobbe
Github user tflobbe commented on the issue:

https://github.com/apache/lucene-solr/pull/354
  
You should create a Solr Jira and then rename this PR to include the Jira 
key; that way it'll get visibility.


---




[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-11-ea+5) - Build # 552 - Still Unstable!

2018-04-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/552/
Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseG1GC

15 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([1975B5B00F8395AD:4ACCF700ED920057]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:841)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 

[jira] [Commented] (SOLR-11823) Incorrect number of replica calculation when using Restore Collection API

2018-04-19 Thread Rohit (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444283#comment-16444283
 ] 

Rohit commented on SOLR-11823:
--

[~awiechers] and [~danixu86]: can you please attach the Collections CREATE API 
command you used to create the collection, along with the state.json file?

> Incorrect number of replica calculation when using Restore Collection API
> -
>
> Key: SOLR-11823
> URL: https://issues.apache.org/jira/browse/SOLR-11823
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.1
>Reporter: Ansgar Wiechers
>Priority: Major
>
> I'm running Solr 7.1 (didn't test other versions) in SolrCloud mode on a 
> 3-node cluster and tried using the backup/restore API for the first time. 
> Backup worked fine, but when trying to restore the backed-up collection I ran 
> into an unexpected problem with the replication factor setting.
> I expected the command below to restore a backup of the collection "demo" 
> with 3 shards, creating 2 replicas per shard. Instead it's trying to create 6 
> replicas per shard:
> {noformat}
> # curl -s -k 
> 'https://localhost:8983/solr/admin/collections?action=restore=demo=/srv/backup/solr/solr-dev=demo=2=2'
> {
>   "error": {
> "code": 400,
> "msg": "Solr cloud with available number of nodes:3 is insufficient for 
> restoring a collection with 3 shards, total replicas per shard 6 and 
> maxShardsPerNode 2. Consider increasing maxShardsPerNode value OR number 
> of available nodes.",
> "metadata": [
>   "error-class",
>   "org.apache.solr.common.SolrException",
>   "root-error-class",
>   "org.apache.solr.common.SolrException"
> ]
>   },
>   "exception": {
> "rspCode": 400,
> "msg": "Solr cloud with available number of nodes:3 is insufficient for 
> restoring a collection with 3 shards, total replicas per shard 6 and 
> maxShardsPerNode 2. Consider increasing maxShardsPerNode value OR number of 
> available nodes."
>   },
>   "Operation restore caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Solr cloud with available number of nodes:3 is insufficient for restoring a 
> collection with 3 shards, total replicas per shard 6 and maxShardsPerNode 2. 
> Consider increasing maxShardsPerNode value OR number of available nodes.",
>   "responseHeader": {
> "QTime": 28,
> "status": 400
>   }
> }
> {noformat}
> Restoring a collection with only 2 shards tries to create 6 replicas as well, 
> so it looks to me like the restore API multiplies the replication factor with 
> the number of nodes, which is not how the replication factor behaves in other 
> contexts. The 
> [documentation|https://lucene.apache.org/solr/guide/7_1/collections-api.html] 
> also didn't lead me to expect this behavior:
> {quote}
> replicationFactor
>The number of replicas to be created for each shard.
> {quote}






[JENKINS] Lucene-Solr-Tests-7.x - Build # 572 - Unstable

2018-04-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/572/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [Overseer] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.cloud.Overseer.start(Overseer.java:545)  at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
  at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) 
 at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)  
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)  at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)  at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
  at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
  at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [Overseer]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.cloud.Overseer
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at org.apache.solr.cloud.Overseer.start(Overseer.java:545)
at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
at 
org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393)
at 
org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([A4B187167D18FE3A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:303)
at sun.reflect.GeneratedMethodAccessor77.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.ZkControllerTest: 
1) 

[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-19 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444213#comment-16444213
 ] 

Erick Erickson commented on LUCENE-7976:


bq: Just curious, how did you go about measuring that?  

First a disclaimer: the intent here was to get some idea whether things had 
blown up all out of proportion, so rigor wasn't the main thrust.

Anyway, I have a client program that assembles docs then sends the same set of 
docs to two instances of Solr, one running old and one running new code. Then I 
hacked a bit into each that prints, to a file, the number of bytes being merged 
into each new segment (i.e. each of the OneMerges each time a 
MergeSpecification is returned from TieredMergePolicy.findMerges) and 
accumulates the total.

Each doc has a randomly-generated ID in a bounded range, so I get deletions.

So I get output like: 
Bytes Written This Pass: 15,456,941: Accumulated Bytes Written: 16,071,461,273 
This pct del: 26, accum pct del max: 26

Finally, I lowered the max segment size artificially to force lots and lots of 
merges. So there are several places it might not reflect reality.

Your simulation sounds cool, but for this case how deletes affect decisions on 
which segments to merge is a critical difference between the old and new way of 
doing things, so it needs to be exercised.
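A sketch of that kind of instrumentation (names hypothetical; hung off the MergeSpecification returned by findMerges):
{code:java}
import java.io.IOException;
import org.apache.lucene.index.MergePolicy;

// Sketch: accumulate the bytes each returned MergeSpecification will
// rewrite, to compare write amplification between two merge policies.
class MergeByteCounter {
  private long accumulated; // total bytes merged so far

  long record(MergePolicy.MergeSpecification spec) throws IOException {
    if (spec == null) {
      return accumulated;
    }
    long thisPass = 0;
    for (MergePolicy.OneMerge merge : spec.merges) {
      thisPass += merge.totalBytesSize(); // sum of the input segments' sizes
    }
    accumulated += thisPass;
    System.out.printf("Bytes Written This Pass: %,d: Accumulated Bytes Written: %,d%n",
        thisPass, accumulated);
    return accumulated;
  }
}
{code}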

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.






[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7275 - Still Unstable!

2018-04-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7275/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.test

Error Message:
Error from server at http://127.0.0.1:58813/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:58813/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([889AC4EEA761B289:CEFB34099DDF71]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.test(TimeRoutedAliasUpdateProcessorTest.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-12242) UnifiedHighlighter does not work with Surround query parser (SurroundQParser)

2018-04-19 Thread Andy Liu (JIRA)
Andy Liu created SOLR-12242:
---

 Summary: UnifiedHighlighter does not work with Surround query 
parser (SurroundQParser)
 Key: SOLR-12242
 URL: https://issues.apache.org/jira/browse/SOLR-12242
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: highlighter
Affects Versions: 7.2.1
Reporter: Andy Liu
 Attachments: TestUnifiedHighlighterSurround.java

I'm attempting to use the UnifiedHighlighter in conjunction with queries parsed 
by Solr's SurroundQParserPlugin. When doing so, the response yields empty 
arrays for documents that should contain highlighted snippets.

I've attached a test for UnifiedHighlighter that uses the surround's 
QueryParser and preprocesses the query in a similar fashion as SurroundQParser, 
which results in test failure.  When creating a SpanQuery directly (rather than 
via surround's QueryParser), the test passes.

The problem can be isolated to the code path initiated by 
UnifiedHighlighter.extractTerms(), which uses EMPTY_INDEXSEARCHER to extract 
terms from the query. After a series of method calls, we end up at 
DistanceQuery.getSpanNearQuery(), where 
{{((DistanceSubQuery)sqi.next()).addSpanQueries(sncf)}} fails silently and 
doesn't add any span queries.  

Another data point: If I hack UnifiedHighlighter and pass in a live 
IndexSearcher to extractTerms(), highlighting works. 






[jira] [Resolved] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page

2018-04-19 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-12163.
--
   Resolution: Fixed
Fix Version/s: master (8.0)

I ended up only adding a note that settings that can go in zookeeper-env.sh on 
*nix have to go into zkServer.cmd on Windows. I can't find any good examples of 
doing that in the ZK docs, so hopefully someone will be able to give us 
feedback based on their experience that we can add to the docs later.
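For anyone looking for the concrete shape of that note, something like the following (paths and values illustrative only):
{noformat}
# *nix: conf/zookeeper-env.sh
ZOO_LOG_DIR=/var/log/zookeeper
SERVER_JVMFLAGS="-Xms512m -Xmx512m"

# Windows: the equivalent settings have to be edited into zkServer.cmd, e.g.
set SERVER_JVMFLAGS=-Xms512m -Xmx512m
{noformat}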

> Ref Guide: Improve Setting Up an External ZK Ensemble page
> --
>
> Key: SOLR-12163
> URL: https://issues.apache.org/jira/browse/SOLR-12163
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: setting-up-an-external-zookeeper-ensemble.adoc
>
>
> I had to set up a ZK ensemble the other day for the first time in a while, 
> and thought I'd test our docs on the subject while I was at it. I headed over 
> to 
> https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble.html,
>  and...Well, I still haven't gotten back to what I was trying to do, but I 
> rewrote the entire page.
> The problem to me is that the page today is mostly a stripped down copy of 
> the ZK Getting Started docs: walking through setting up a single ZK instance 
> before introducing the idea of an ensemble and going back through the same 
> configs again to update them for the ensemble.
> IOW, despite the page being titled "setting up an ensemble", it's mostly 
> about not setting up an ensemble. That's at the end of the page, which itself 
> focuses a bit heavily on the use case of running an ensemble on a single 
> server (so, if you're counting...that's 3 use cases we don't want people to 
> use discussed in detail on a page that's supposedly about _not_ doing any of 
> those things).
> So, I took all of it and restructured the whole thing to focus primarily on 
> the use case we want people to use: running 3 ZK nodes on different machines. 
> Running 3 on one machine is still there, but noted in passing with the 
> appropriate caveats. I've also added information about choosing to use a 
> chroot, which AFAICT was only covered in the section on Taking Solr to 
> Production.






[jira] [Commented] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444159#comment-16444159
 ] 

ASF subversion and git services commented on SOLR-12163:


Commit 2defbf060564d6dafdc775c12c21ced0ad8ebc09 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2defbf0 ]

SOLR-12163: Updated and expanded ZK ensemble docs


> Ref Guide: Improve Setting Up an External ZK Ensemble page
> --
>
> Key: SOLR-12163
> URL: https://issues.apache.org/jira/browse/SOLR-12163
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
> Attachments: setting-up-an-external-zookeeper-ensemble.adoc
>
>
> I had to set up a ZK ensemble the other day for the first time in a while, 
> and thought I'd test our docs on the subject while I was at it. I headed over 
> to 
> https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble.html,
>  and...Well, I still haven't gotten back to what I was trying to do, but I 
> rewrote the entire page.
> The problem to me is that the page today is mostly a stripped down copy of 
> the ZK Getting Started docs: walking through setting up a single ZK instance 
> before introducing the idea of an ensemble and going back through the same 
> configs again to update them for the ensemble.
> IOW, despite the page being titled "setting up an ensemble", it's mostly 
> about not setting up an ensemble. That's at the end of the page, which itself 
> focuses a bit heavily on the use case of running an ensemble on a single 
> server (so, if you're counting...that's 3 use cases we don't want people to 
> use discussed in detail on a page that's supposedly about _not_ doing any of 
> those things).
> So, I took all of it and restructured the whole thing to focus primarily on 
> the use case we want people to use: running 3 ZK nodes on different machines. 
> Running 3 on one machine is still there, but noted in passing with the 
> appropriate caveats. I've also added information about choosing to use a 
> chroot, which AFAICT was only covered in the section on Taking Solr to 
> Production.






[jira] [Commented] (SOLR-11646) Ref Guide: Update API examples to include v2 style examples

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444160#comment-16444160
 ] 

ASF subversion and git services commented on SOLR-11646:


Commit 0c542c44d9ec6204bec912a6ab138a0cfb5533d0 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c542c4 ]

SOLR-11646: change tab-pane padding to align better under tabs


> Ref Guide: Update API examples to include v2 style examples
> ---
>
> Key: SOLR-11646
> URL: https://issues.apache.org/jira/browse/SOLR-11646
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, v2 API
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
>
> The Ref Guide currently only has a single page with what might be generously 
> called an overview of the v2 API added in 6.5 
> (https://lucene.apache.org/solr/guide/v2-api.html) but most of the actual 
> APIs that support the v2 approach do not show an example of using it with the 
> v2 style. A few v2-style APIs are already used as examples, but there's 
> nothing consistent.
> With this issue I'll add API input/output examples throughout the Guide. Just 
> in terms of process, my intention is to have a series of commits to the pages 
> as I work through them so we make incremental progress. I'll start by adding 
> a list of pages/APIs to this issue so the scope of the work is clear.
> Once this is done we can figure out what to do with the V2 API page itself - 
> perhaps it gets archived and replaced with another page that describes Solr's 
> APIs overall; perhaps by then we figure out something else to do with it.






[jira] [Commented] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444158#comment-16444158
 ] 

ASF subversion and git services commented on SOLR-12163:


Commit 42da6f795d8cd68891845f20201a902f7da4c579 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=42da6f7 ]

SOLR-12163: Updated and expanded ZK ensemble docs


> Ref Guide: Improve Setting Up an External ZK Ensemble page
> --
>
> Key: SOLR-12163
> URL: https://issues.apache.org/jira/browse/SOLR-12163
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
> Attachments: setting-up-an-external-zookeeper-ensemble.adoc
>
>
> I had to set up a ZK ensemble the other day for the first time in a while, 
> and thought I'd test our docs on the subject while I was at it. I headed over 
> to 
> https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble.html,
>  and...Well, I still haven't gotten back to what I was trying to do, but I 
> rewrote the entire page.
> The problem to me is that the page today is mostly a stripped down copy of 
> the ZK Getting Started docs: walking through setting up a single ZK instance 
> before introducing the idea of an ensemble and going back through the same 
> configs again to update them for the ensemble.
> IOW, despite the page being titled "setting up an ensemble", it's mostly 
> about not setting up an ensemble. That's at the end of the page, which itself 
> focuses a bit heavily on the use case of running an ensemble on a single 
> server (so, if you're counting, that's 3 use cases we don't want people to 
> use, discussed in detail, on a page that's supposedly about _not_ doing any of 
> those things).
> So, I took all of it and restructured the whole thing to focus primarily on 
> the use case we want people to use: running 3 ZK nodes on different machines. 
> Running 3 on one machine is still there, but noted in passing with the 
> appropriate caveats. I've also added information about choosing to use a 
> chroot, which AFAICT was only covered in the section on Taking Solr to 
> Production.
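
For illustration only: a chroot is just a path suffix on the ZK connect string, 
and the node has to exist before Solr can use it. A minimal sketch with the 
plain ZooKeeper client (the hostnames and the /solr path are placeholders, and 
this assumes a reachable 3-node ensemble):

{code:java}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ChrootSketch {
  public static void main(String[] args) throws Exception {
    // Connect without a chroot to create the root node once.
    ZooKeeper admin = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 30_000,
        event -> {});
    if (admin.exists("/solr", false) == null) {
      admin.create("/solr", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
          CreateMode.PERSISTENT);
    }
    admin.close();

    // Every request made through this handle is now rooted at /solr,
    // which is what Solr sees when the connect string ends in "/solr".
    ZooKeeper chrooted = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181/solr",
        30_000, event -> {});
    System.out.println("chrooted session: 0x"
        + Long.toHexString(chrooted.getSessionId()));
    chrooted.close();
  }
}
{code}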



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11646) Ref Guide: Update API examples to include v2 style examples

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444157#comment-16444157
 ] 

ASF subversion and git services commented on SOLR-11646:


Commit aab2c770c6f934745b23f14649ce476d582f7afb in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aab2c77 ]

SOLR-11646: change tab-pane padding to align better under tabs


> Ref Guide: Update API examples to include v2 style examples
> ---
>
> Key: SOLR-11646
> URL: https://issues.apache.org/jira/browse/SOLR-11646
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: documentation, v2 API
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
>
> The Ref Guide currently has only a single page with what might be generously 
> called an overview of the v2 API added in 6.5 
> (https://lucene.apache.org/solr/guide/v2-api.html), but most of the actual 
> APIs that support the v2 approach are not documented with v2-style examples. 
> A few v2-style APIs are already used as examples, but there's nothing 
> consistent.
> With this issue I'll add API input/output examples throughout the Guide. Just 
> in terms of process, my intention is to have a series of commits to the pages 
> as I work through them so we make incremental progress. I'll start by adding 
> a list of pages/APIs to this issue so the scope of the work is clear.
> Once this is done we can figure out what to do with the V2 API page itself - 
> perhaps it gets archived and replaced with another page that describes Solr's 
> APIs overall; perhaps by then we figure out something else to do with it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-04-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444088#comment-16444088
 ] 

Jan Høydahl commented on SOLR-7896:
---

I was certain that Solr used to be able to load the (static) Admin UI files, 
such as {{/solr/libs/angular-resource.min.js.map}}, without the browser 
prompting for authentication, if Basic Auth is enabled. But now when I test, I 
get the browser prompt on every single load of the Admin UI front page, 
triggered by the browser trying to load a static file.

I tried with master, 7.x, 6.x, and even 5.5.5, all with the same results. 
Please refresh my memory.
 
For this feature to work, we need all static resources to be served (by Jetty 
or by Solr) to the browser without auth, and authentication enforced only on 
the Solr APIs that Angular calls via Ajax. Otherwise we won't be able to show 
our nice login page before the browser throws up its ugly one :)
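
For illustration only, a minimal sketch of that split (this is not Solr's 
actual filter chain, and the extension list is a placeholder): static assets 
pass through unauthenticated, API calls must carry credentials, and the 401 
deliberately omits {{WWW-Authenticate}} so the browser never shows its native 
Basic Auth prompt and Angular can render its own login page instead:

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StaticBypassFilter implements Filter {
  @Override public void init(FilterConfig cfg) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest http = (HttpServletRequest) req;
    // Static UI resources are served without any auth check.
    boolean isStatic = http.getRequestURI()
        .matches(".*\\.(js|css|html|png|map|woff2?)$");
    if (isStatic || http.getHeader("Authorization") != null) {
      chain.doFilter(req, res);
    } else {
      // No WWW-Authenticate header: the browser stays quiet and the
      // Angular interceptor can redirect to the custom login page on 401.
      ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
    }
  }
}
{code}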

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: authentication, login, password
>
> Now that Solr supports Authentication plugins, the missing piece is allowing 
> access from the Admin UI when authentication is enabled. For this we need
>  * Some plumbing in Admin UI that allows the UI to detect 401 responses and 
> redirect to login page
>  * Possibility to have multiple login pages depending on auth method and 
> redirect to the correct one
>  * [AngularJS HTTP 
> interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to 
> add correct HTTP headers on all requests when user is logged in
> This issue should aim to implement some of the plumbing mentioned above, and 
> make it work with Basic Auth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8260) Extract ReaderPool from IndexWriter

2018-04-19 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444082#comment-16444082
 ] 

Simon Willnauer commented on LUCENE-8260:
-

Here is also a review PR: https://github.com/s1monw/lucene-solr/pull/12/

>  Extract ReaderPool from IndexWriter
> 
>
> Key: LUCENE-8260
> URL: https://issues.apache.org/jira/browse/LUCENE-8260
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8260.diff
>
>
> ReaderPool plays a central role in the IndexWriter, pooling NRT readers and 
> making sure we write buffered deletes and updates to disk. This class used to 
> be a non-static inner class accessing many aspects, including locks, of the 
> IndexWriter itself. This change moves the class outside of IW and defines 
> its responsibility in a clear way with respect to locks etc. Now IndexWriter 
> doesn't need to share ReaderPool anymore and reacts to writes done inside the 
> pool by checkpointing internally. This also removes the acquisition of the IW 
> lock inside the reader pool, which made reasoning about concurrency difficult.
> This change also adds javadocs and dedicated tests for the ReaderPool class.
> /cc [~mikemccand] [~dawidweiss]
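
For context only: ReaderPool is internal, but the behavior it backs is visible 
through the public NRT API. A minimal sketch (in-memory index, placeholder 
field name and text):

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.RAMDirectory;

public class NrtReaderSketch {
  public static void main(String[] args) throws Exception {
    IndexWriter writer = new IndexWriter(new RAMDirectory(),
        new IndexWriterConfig(new StandardAnalyzer()));
    Document doc = new Document();
    doc.add(new TextField("body", "hello nrt", Store.NO));
    writer.addDocument(doc);
    // The pooled per-segment readers that ReaderPool manages are what make
    // this reader see the document above without a commit.
    try (DirectoryReader nrt = DirectoryReader.open(writer)) {
      System.out.println("docs visible before commit: " + nrt.numDocs());
    }
    writer.close();
  }
}
{code}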



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11833) Allow searchRate trigger to delete replicas

2018-04-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444056#comment-16444056
 ] 

Shalin Shekhar Mangar commented on SOLR-11833:
--

bq. PULL replicas are not searchable, if I understand it correctly, so it 
doesn't make sense to include them in search rate monitoring.

They participate in searches. They don't index data but periodically pull the 
latest segments from the leader.

bq.  Rather, we could support rate but set aboveRate to this value and 
belowRate to 0 - this will reproduce the behavior from 7.2 where there was no 
belowOp.

+1
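
For illustration only: a sketch of what setting such a trigger might look like 
once this lands. The {{aboveRate}}/{{belowRate}} names are taken from this 
thread rather than a released API, and the collection name, rates, and endpoint 
defaults are placeholders:

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SearchRateTriggerSketch {
  public static void main(String[] args) throws Exception {
    String payload =
        "{ \"set-trigger\": { \"name\": \"search_rate_trigger\","
      + "  \"event\": \"searchRate\", \"collection\": \"mycoll\","
      + "  \"aboveRate\": 100.0,"    // add replicas above this rate
      + "  \"belowRate\": 0.0 } }";  // 0 reproduces the add-only 7.2 behavior
    HttpURLConnection conn = (HttpURLConnection)
        new URL("http://localhost:8983/solr/admin/autoscaling").openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/json");
    try (OutputStream os = conn.getOutputStream()) {
      os.write(payload.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("status: " + conn.getResponseCode());
  }
}
{code}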

> Allow searchRate trigger to delete replicas
> ---
>
> Key: SOLR-11833
> URL: https://issues.apache.org/jira/browse/SOLR-11833
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11833.patch, SOLR-11833.patch
>
>
> Currently {{SearchRateTrigger}} generates events when search rate thresholds 
> are exceeded, and {{ComputePlanAction}} computes ADDREPLICA actions in 
> response - adding replicas should allow the search rate to be reduced across 
> the increased number of replicas.
> However, once the peak load period is over, the collection is left with too 
> many replicas, which unnecessarily tie up cluster resources. 
> {{SearchRateTrigger}} should detect situations like this and generate events 
> that should cause some of these replicas to be deleted.
> {{SearchRateTrigger}} should use hysteresis to avoid thrashing when the rate 
> is close to the threshold.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8260) Extract ReaderPool from IndexWriter

2018-04-19 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8260:

Attachment: LUCENE-8260.diff

>  Extract ReaderPool from IndexWriter
> 
>
> Key: LUCENE-8260
> URL: https://issues.apache.org/jira/browse/LUCENE-8260
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8260.diff
>
>
> ReaderPool plays a central role in the IndexWriter, pooling NRT readers and 
> making sure we write buffered deletes and updates to disk. This class used to 
> be a non-static inner class accessing many aspects, including locks, of the 
> IndexWriter itself. This change moves the class outside of IW and defines 
> its responsibility in a clear way with respect to locks etc. Now IndexWriter 
> doesn't need to share ReaderPool anymore and reacts to writes done inside the 
> pool by checkpointing internally. This also removes the acquisition of the IW 
> lock inside the reader pool, which made reasoning about concurrency difficult.
> This change also adds javadocs and dedicated tests for the ReaderPool class.
> /cc [~mikemccand] [~dawidweiss]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8259) Extract ReaderPool from IndexWriter

2018-04-19 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-8259:
---

 Summary:  Extract ReaderPool from IndexWriter
 Key: LUCENE-8259
 URL: https://issues.apache.org/jira/browse/LUCENE-8259
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 7.4, master (8.0)
Reporter: Simon Willnauer
 Fix For: 7.4, master (8.0)
 Attachments: extract_reader_pool.diff

ReaderPool plays a central role in the IndexWriter, pooling NRT readers and 
making sure we write buffered deletes and updates to disk. This class used to 
be a non-static inner class accessing many aspects, including locks, of the 
IndexWriter itself. This change moves the class outside of IW and defines its 
responsibility in a clear way with respect to locks etc. Now IndexWriter doesn't 
need to share ReaderPool anymore and reacts to writes done inside the pool by 
checkpointing internally. This also removes the acquisition of the IW lock 
inside the reader pool, which made reasoning about concurrency difficult.

This change also adds javadocs and dedicated tests for the ReaderPool class.

/cc [~mikemccand] [~dawidweiss]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8260) Extract ReaderPool from IndexWriter

2018-04-19 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-8260:
---

 Summary:  Extract ReaderPool from IndexWriter
 Key: LUCENE-8260
 URL: https://issues.apache.org/jira/browse/LUCENE-8260
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 7.4, master (8.0)
Reporter: Simon Willnauer
 Fix For: 7.4, master (8.0)
 Attachments: LUCENE-8260.diff

ReaderPool plays a central role in the IndexWriter, pooling NRT readers and 
making sure we write buffered deletes and updates to disk. This class used to 
be a non-static inner class accessing many aspects, including locks, of the 
IndexWriter itself. This change moves the class outside of IW and defines its 
responsibility in a clear way with respect to locks etc. Now IndexWriter doesn't 
need to share ReaderPool anymore and reacts to writes done inside the pool by 
checkpointing internally. This also removes the acquisition of the IW lock 
inside the reader pool, which made reasoning about concurrency difficult.

This change also adds javadocs and dedicated tests for the ReaderPool class.

/cc [~mikemccand] [~dawidweiss]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8258) GeoComplexPolygon fails computing traversals

2018-04-19 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444016#comment-16444016
 ] 

Ignacio Vera commented on LUCENE-8258:
--

Thanks [~kwri...@metacarta.com], yes, I think we should keep it open. I will 
have some time to look into it and will report any findings.

> GeoComplexPolygon fails computing traversals
> 
>
> Key: LUCENE-8258
> URL: https://issues.apache.org/jira/browse/LUCENE-8258
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8258.jpg, LUCENE-8258.patch
>
>
> There are some situations where checking for membership in a 
> GeoComplexPolygon results in the following error:
> {{java.lang.IllegalArgumentException: No off-plane intersection points were 
> found; can't compute traversal}}
> It seems the intersection of the auxiliary planes created is outside of the 
> world.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8258) GeoComplexPolygon fails computing traversals

2018-04-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444014#comment-16444014
 ] 

Karl Wright commented on LUCENE-8258:
-

I committed what I had; it makes the test pass, but I have reservations that it 
may cause failures in other situations, so I don't think we're ready to close 
this ticket.


> GeoComplexPolygon fails computing traversals
> 
>
> Key: LUCENE-8258
> URL: https://issues.apache.org/jira/browse/LUCENE-8258
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8258.jpg, LUCENE-8258.patch
>
>
> There are some situations where checking for membership in a 
> GeoComplexPolygon results in the following error:
> {{java.lang.IllegalArgumentException: No off-plane intersection points were 
> found; can't compute traversal}}
> It seems the intersection of the auxiliary planes created is outside of the 
> world.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12241) Error in proximity parsing when _query_ with complex phrase precedes normal query containing proximity

2018-04-19 Thread John Stratoulis (JIRA)
John Stratoulis created SOLR-12241:
--

 Summary: Error in proximity parsing when _query_ with complex 
phrase precedes normal query containing proximity
 Key: SOLR-12241
 URL: https://issues.apache.org/jira/browse/SOLR-12241
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Affects Versions: 7.0.1, 6.0
 Environment: Ubuntu 16.04
Reporter: John Stratoulis


Let's say I am using a query like this:

 
{code:java}
_query_:"{!complexphrase inOrder=false}\"ernst εταιρ* συμβουλ*\"" || "ey 
επενδυτικη"~10
{code}
The result is not as expected.

Using the debugger, I can see that it is parsed as follows (note that the 
{{~10}} proximity is dropped):

 
{code:java}
"rawquerystring":"_query_:\"{!complexphrase inOrder=false}\\\"ernst εταιρ* 
συμβουλ*\\\"\" || \"ey επενδυτικη\"~10",
 "querystring":"_query_:\"{!complexphrase inOrder=false}\\\"ernst εταιρ* 
συμβουλ*\\\"\" || \"ey επενδυτικη\"~10",
 "parsedquery":"ComplexPhraseQuery(\"ernst εταιρ* συμβουλ*\") 
PhraseQuery(default_text:\"ey επενδυτικη\")",
 "parsedquery_toString":"\"ernst εταιρ* συμβουλ*\" default_text:\"ey 
επενδυτικη\"",{code}
If I reverse the query so it looks like this:

 
{code:java}
"ey επενδυτικη"~10 || _query_:"{!complexphrase inOrder=false}\"ernst εταιρ* 
συμβουλ*\""
{code}
The result is as expected.

 
{code:java}
"rawquerystring":"\"ey επενδυτικη\"~10 || _query_:\"{!complexphrase 
inOrder=false}\\\"ernst εταιρ* συμβουλ*\\\"\"",
 "querystring":"\"ey επενδυτικη\"~10 || _query_:\"{!complexphrase 
inOrder=false}\\\"ernst εταιρ* συμβουλ*\\\"\"",
 "parsedquery":"PhraseQuery(default_text:\"ey επενδυτικη\"~10) 
ComplexPhraseQuery(\"ernst εταιρ* συμβουλ*\")",
 "parsedquery_toString":"default_text:\"ey επενδυτικη\"~10 \"ernst εταιρ* 
συμβουλ*\"",{code}
The same correct result occurs if I transform it like this:
{code:java}
_query_:"{!complexphrase inOrder=false}\"ernst εταιρ* συμβουλ*\"" || 
_query_:"\"ey επενδυτικη\"~10"{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8258) GeoComplexPolygon fails computing traversals

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444008#comment-16444008
 ] 

ASF subversion and git services commented on LUCENE-8258:
-

Commit a61018fd99875f9a280924f6943fef4d797ca7ca in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a61018f ]

LUCENE-8258: Tighten rejection of travel planes that are too close to an edge.  
Note: this may cause failures in some cases; haven't seen it, but if that 
happens, the logic will need to change instead of just the cutoff.


> GeoComplexPolygon fails computing traversals
> 
>
> Key: LUCENE-8258
> URL: https://issues.apache.org/jira/browse/LUCENE-8258
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8258.jpg, LUCENE-8258.patch
>
>
> There are some situations where checking for membership in a 
> GeoComplexPolygon results in the following error:
> {{java.lang.IllegalArgumentException: No off-plane intersection points were 
> found; can't compute traversal}}
> It seems the intersection of the auxiliary planes created is outside of the 
> world.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8258) GeoComplexPolygon fails computing traversals

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444007#comment-16444007
 ] 

ASF subversion and git services commented on LUCENE-8258:
-

Commit f3e0fab70a9e115ea86bf9b4ee42b702e335c9cc in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f3e0fab ]

LUCENE-8258: Tighten rejection of travel planes that are too close to an edge.  
Note: this may cause failures in some cases; haven't seen it, but if that 
happens, the logic will need to change instead of just the cutoff.


> GeoComplexPolygon fails computing traversals
> 
>
> Key: LUCENE-8258
> URL: https://issues.apache.org/jira/browse/LUCENE-8258
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8258.jpg, LUCENE-8258.patch
>
>
> There are some situations where checking for membership in a 
> GeoComplexPolygon results in the following error:
> {{java.lang.IllegalArgumentException: No off-plane intersection points were 
> found; can't compute traversal}}
> It seems the intersection of the auxiliary planes created is outside of the 
> world.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8258) GeoComplexPolygon fails computing traversals

2018-04-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444005#comment-16444005
 ] 

ASF subversion and git services commented on LUCENE-8258:
-

Commit a033759f127cec8137351a47dc4f6703941eab01 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a033759 ]

LUCENE-8258: Tighten rejection of travel planes that are too close to an edge.  
Note: this may cause failures in some cases; haven't seen it, but if that 
happens, the logic will need to change instead of just the cutoff.


> GeoComplexPolygon fails computing traversals
> 
>
> Key: LUCENE-8258
> URL: https://issues.apache.org/jira/browse/LUCENE-8258
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Major
> Attachments: LUCENE-8258.jpg, LUCENE-8258.patch
>
>
> There are some situations where checking for membership in a 
> GeoComplexPolygon results in the following error:
> {{java.lang.IllegalArgumentException: No off-plane intersection points were 
> found; can't compute traversal}}
> It seems the intersection of the auxiliary planes created is outside of the 
> world.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


