[JENKINS-MAVEN] Lucene-Solr-Maven-7.4 #10: POMs out of sync

2018-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-7.4/10/

No tests ran.

Build Log:
[...truncated 19398 lines...]
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.4/build.xml:672: The 
following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.4/build.xml:209: The 
following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.4/lucene/build.xml:425:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.4/lucene/common-build.xml:2264:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.4/lucene/common-build.xml:1720:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.4/lucene/common-build.xml:650:
 Error deploying artifact 'org.apache.lucene:lucene-benchmark:jar': Error 
installing artifact's metadata: repository metadata for: 'artifact 
org.apache.lucene:lucene-benchmark' could not be retrieved from repository: 
apache.snapshots.https due to an error: Error transferring file: Connection 
refused (Connection refused)

Total time: 14 minutes 18 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1052 - Still Failing

2018-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1052/

No tests ran.

Build Log:
[...truncated 24156 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2243 links (1792 relative) to 3127 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-S

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1927 - Still Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1927/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 test failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:62593/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:51212/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:52727/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:62593/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:51212/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:52727/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([3E9C850F15323EFC:945156FDA2E1EB2C]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:290)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22301 - Still Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22301/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

1 test failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=10709, 
name=cdcr-replicator-3446-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=10709, name=cdcr-replicator-3446-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([27F8202107443866]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13442 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> 726810 INFO  
(SUITE-CdcrBidirectionalTest-seed#[27F8202107443866]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_27F8202107443866-001/init-core-data-001
   [junit4]   2> 726813 WARN  
(SUITE-CdcrBidirectionalTest-seed#[27F8202107443866]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=25 numCloses=25
   [junit4]   2> 726813 INFO  
(SUITE-CdcrBidirectionalTest-seed#[27F8202107443866]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 726814 INFO  
(SUITE-CdcrBidirectionalTest-seed#[27F8202107443866]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason="", value=0.0/0.0, ssl=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 726821 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[27F8202107443866]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 726821 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[27F8202107443866]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_27F8202107443866-001/cdcr-cluster2-001
   [junit4]   2> 726822 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[27F8202107443866]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 726834 INFO  (Thread-2493) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 726834 INFO  (Thread-2493) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 726867 ERROR (Thread-2493) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 726934 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[27F8202107443866]) [] 
o.a.s.c.ZkTestServer start zk server on port:38873
   [junit4]   2> 726967 INFO  (zkConnectionManagerCallback-2451-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 726987 INFO  (jetty-launcher-2448-thread-1) [] 
o.e.j.s.Server jetty-9.4.10.v20180503; built: 2018-05-03T15:56:21.710Z; git: 
daa59876e6f384329b122929e70a80934569428c; jvm 10.0.1+10
   [junit4]   2> 727006 INFO  (jetty-launcher-2448-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 727006 INFO  (jetty-launcher-2448-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 727006 INFO  (jetty-launcher-2448-thread-1) [] 
o.e.j.s.session node0 Scavenging every 60ms
   [junit4]   2> 727007 INFO  (jetty-launcher-2448-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@7c514370{/solr,null,AVAILABLE}
   [junit4]   2> 727008 INFO  (jetty-launcher-2448-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@55d7ae79{SSL,[ssl, 
http/1.1]}{127.0.0.1:39349}
   [junit4]   2> 727008 INFO  (jetty-launcher-2448-thread-1) [] 
o.e.j.s.Server Started @727033ms
   [junit4]   2> 727008 INFO  (jetty-launcher-2448-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=39349}
 

[JENKINS] Lucene-Solr-repro - Build # 869 - Unstable

2018-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/869/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/86/consoleText

[repro] Revision: 5b5b09c83ea8ab43e6aea565bb47ce790421481e

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=D33C7566C39CF0E0 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ms 
-Dtests.timezone=MST7MDT -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testWatchesWorkForStateFormat1 -Dtests.seed=AE633F41204C8B5E 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=nl-NL -Dtests.timezone=Australia/Hobart -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testSimpleCollectionWatch -Dtests.seed=AE633F41204C8B5E 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=nl-NL -Dtests.timezone=Australia/Hobart -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testWaitForStateWatcherIsRetainedOnPredicateFailure 
-Dtests.seed=AE633F41204C8B5E -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=nl-NL -Dtests.timezone=Australia/Hobart 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
3b9d3a760a432b97aad2c08b2f778fa2344eb14a
[repro] git fetch
[repro] git checkout 5b5b09c83ea8ab43e6aea565bb47ce790421481e

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]   solr/solrj
[repro]   TestCollectionStateWatchers
[repro]   solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 2468 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror  
-Dtests.seed=AE633F41204C8B5E -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=nl-NL -Dtests.timezone=Australia/Hobart 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 1848 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 1331 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=D33C7566C39CF0E0 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ms -Dtests.timezone=MST7MDT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 24868 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]   solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=D33C7566C39CF0E0 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ms -Dtests.timezone=MST7MDT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 13394 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]   solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ms 
-Dtests.timezone=MST7MDT -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 22327 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 3b9d3a760a432b97aad2c08b2f778fa2344eb14a

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 82 - Still Unstable

2018-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/82/

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:10001_solr, 
127.0.0.1:1_solr] Last available state: 
DocCollection(testMixedBounds_collection//clusterstate.json/29)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   "core":"testMixedBounds_collection_shard2_replica_n3", 
  "leader":"true",   "SEARCHER.searcher.maxDoc":0,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":10240, 
  "node_name":"127.0.0.1:10001_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":9.5367431640625E-6,   
"SEARCHER.searcher.numDocs":0}, "core_node4":{   
"core":"testMixedBounds_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:1_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"0-7fff",   "state":"active"}, "shard1":{   
"stateTimestamp":"1529724483343326950",   "replicas":{ 
"core_node1":{   "core":"testMixedBounds_collection_shard1_replica_n1", 
  "leader":"true",   "SEARCHER.searcher.maxDoc":495,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":257740,
   "node_name":"127.0.0.1:10001_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}, "core_node2":{   
"core":"testMixedBounds_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":495,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":257740,   
"node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1529724483377442800",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}, "core_node9":{   
"core":"testMixedBounds_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}}}, "shard1_0":{   "parent":"shard1",   
"stateTimestamp":"1529724483377075950",   "range":"8000-bfff",  
 "state":"active",   "replicas":{ "core_node7":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}, "core_node8":{   
"core":"testMixedBounds_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}

Stack Trace:
java.lang.AssertionError: failed to create testMixedBounds_collection
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:1_solr]
Last available state: 
DocCollection(testMixedBounds_collection//clusterstate.json/2

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 702 - Failure!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/702/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 12883 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/temp/junit4-J1-20180623_020923_0562326003617317267346.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error (sharedRuntime.cpp:931), pid=2538, tid=59583
   [junit4] #  guarantee(cm != NULL) failed: must have containing compiled 
method for implicit division-by-zero exceptions
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0+181) (build 
9+181)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (9+181, mixed mode, 
tiered, compressed oops, g1 gc, bsd-amd64)
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J1/hs_err_pid2538.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 1988 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/bin/java 
-XX:+UseCompressedOops -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=9907739C3BDF1D8C 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=7.5.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/temp
 -Dcommon.dir=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/clover/db
 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=7.5.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J1
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=2 -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/classes/test:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/test-framework/lib/junit4-ant-2.6.0.jar:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/core/src/test-files:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/codecs/classes/java:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-solrj/classes/java:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/classes/java:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/analysis/common/lucene-analyzers-common-7.5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-7.5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-7.5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/codecs/lucene-codecs-7.5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/backward-codecs/lucene-backward-codecs-7.5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/highlighter/lucene-highlighter-7.5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/build/memory/lucene-memory-7.5.0-SNAPSHOT.jar:/Users/je

[jira] [Commented] (SOLR-12356) Always auto-create ".system" collection when in SolrCloud mode

2018-06-22 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520946#comment-16520946
 ] 

Noble Paul commented on SOLR-12356:
---

{quote}Why? we don't care about search performance of this collection that 
much, we only care about fault tolerance. Having a replica on every node seems 
an overkill - if your cluster is likely to lose N-1 nodes you're in a deep 
trouble anyway 
{quote}
Imagine you have an RF of 3 and 20 nodes. It's not uncommon to lose 3 nodes out 
of 20.
{quote}I disagree - actively hiding this from the users complicates the code 
and prevents them from understanding how it works. 
{quote}
The problem with system-generated config coexisting with user-created config is 
that it leads to
 * config bloat, which leads to poor readability;
 * legacy configuration living in the cluster that the user doesn't know how to 
upgrade when something changes in the framework.

OTOH, if we keep that configuration hidden from users, we eliminate this problem 
altogether. Another place where we apply these principles is the implicitly 
registered responseWriters, requestHandlers, functions, etc. We could have left 
them in {{solrconfig.xml}} and it would have caused the same problems mentioned 
above. In short, I'm not very happy to see autoAddReplicas creating a huge blob 
of config in {{autoscaling.json}} that the user is left to manage.

> Always auto-create ".system" collection when in SolrCloud mode
> --
>
> Key: SOLR-12356
> URL: https://issues.apache.org/jira/browse/SOLR-12356
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Priority: Major
>
> The {{.system}} collection is currently used for blobs, and in SolrCloud mode 
> it's also used for autoscaling history and as a metrics history store 
> (SOLR-11779). It should be automatically created on Overseer start if it's 
> missing.






[jira] [Commented] (SOLR-12495) Enhance the Autoscaling policy syntax to evenly distribute replicas

2018-06-22 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520943#comment-16520943
 ] 

Noble Paul commented on SOLR-12495:
---

bq.I see; could {"core": "#MINIMUM", "node": "#ANY"} be included with this 
issue? Along with per-collection balancing, we'll also need cluster-wide 
balancing.

Well, we already have a global preference which says:
{code}
{"minimize" : "cores"}
{code}

Is there anything that's not already addressed by that? I understand that it 
won't show any violations if you are already in an imbalanced state. 

The problem with implementing a feature like this is that you can clearly have 
conflicts if you create two rules as follows. This can lead to violations that 
are impossible to satisfy:
{code}
{"cores" : "#MINIMUM", "node" : "#ANY"}
{"replica" : "#MINIMUM", "shard" : "#EACH", "node" : "#ANY"}
{code}
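
For reference, a minimal, hypothetical Java sketch (not Solr source code) of the 
{{#MINIMUM}} arithmetic quoted in the issue summary below; the class and method 
names are made up for illustration only:

{code:java}
// Hypothetical illustration only: the "#MINIMUM" bound as described in the issue
// summary, i.e. replica <= Math.ceil(number_of_replicas / number_of_valid_nodes).
public class MinimumReplicaBound {

  // Computed replica ceiling for a given replica count and number of valid nodes.
  static int minimumBound(int numberOfReplicas, int numberOfValidNodes) {
    return (int) Math.ceil((double) numberOfReplicas / numberOfValidNodes);
  }

  public static void main(String[] args) {
    // example 1, case 1 below: nodes=3, replicationFactor=4 -> ceil(4/3) = 2,
    // which matches the hard-coded rule {"replica" : "<3", "shard" : "#EACH", "node" : "#ANY"}
    System.out.println(minimumBound(4, 3)); // prints 2
  }
}
{code}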

> Enhance the Autoscaling policy syntax to evenly distribute replicas
> ---
>
> Key: SOLR-12495
> URL: https://issues.apache.org/jira/browse/SOLR-12495
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> Support a new function value for {{replica= "#MINIMUM"}}
> {{#MINIMUM}} means the minimum computed value for the given configuration
> the value of replica will be calculated as  {{<= 
> Math.ceil(number_of_replicas/number_of_valid_nodes) }}
> *example 1:*
> {code:java}
> {"replica" : "#MINIMUM" , "shard" : "#EACH" , "node" : "#ANY"}
> {code}
> *case 1* : nodes=3, replicationFactor=4
>  the value of replica will be calculated as {{Math.ceil(4/3) = 2}}
> current state : nodes=3, replicationFactor=2
> this is equivalent to the hard coded rule
> {code:java}
> {"replica" : "<3" , "shard" : "#EACH" , "node" : "#ANY"}
> {code}
> *case 2* : 
> current state : nodes=3, replicationFactor=2
> this is equivalent to the hard coded rule
> {code:java}
> {"replica" : "<3" , "shard" : "#EACH" , "node" : "#ANY"}
> {code}
> *example:2*
> {code}
> {"replica" : "#MINIMUM"  , "node" : "#ANY"}{code}
> case 1: numShards = 2, replicationFactor=3, nodes = 5
> this is equivalent to the hard coded rule
> {code:java}
> {"replica" : "<3" , "node" : "#ANY"}
> {code}
> *example:3*
> {code}
> {"replica" : "<2"  , "shard" : "#EACH" , "port" : "8983"}{code}
> case 1: {{replicationFactor=3, nodes with port 8983 = 2}}
> this is equivalent to the hard coded rule
> {code}
> {"replica" : "<3"  , "shard" : "#EACH" , "port" : "8983"}{code}






[jira] [Comment Edited] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-22 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520919#comment-16520919
 ] 

Noble Paul edited comment on SOLR-11985 at 6/23/18 1:53 AM:


bq. for the collection with 4 replicas. In the collection with 4 replicas, you 
could have 2 replicas on us-east-1a and 2 replicas on us-east-1b. What we 
really want is 1 on each before having the 4th replica on another zone...

In reality that is what happens. it starts allotting one at a time and you end 
up with 1 on each zone and another one ends up in a random zone.

But the problem is that once you are already in a badly distributed cluster, it 
won't show any violations.

Once we are done with SOLR-12511, that ceases to be a problem. Your rules will 
look like:
{code}
{"replica" : "33%", "shard" : "#EACH", "sysprop:region": "us-east-1a"}
{"replica" : "33%", "shard" : "#EACH", "sysprop:region": "us-east-1b"}
{"replica" : "33%", "shard" : "#EACH", "sysprop:region": "us-east-1c"}
{code}

This means the effective policy for a shard with 4 replicas is:
{code}
{"replica" : "1.33", "shard" : "#EACH", "sysprop:region": "us-east-1a"}
{"replica" : "1.33", "shard" : "#EACH", "sysprop:region": "us-east-1b"}
{"replica" : "1.33", "shard" : "#EACH", "sysprop:region": "us-east-1c"}
{code}

This means that any zone with 0 replicas is a violation. 


was (Author: noble.paul):
bq. for the collection with 4 replicas. In the collection with 4 replicas, you 
could have 2 replicas on us-east-1a and 2 replicas on us-east-1b. What we 
really want is 1 on each before having the 4th replica on another zone...

In reality that is what happens. it starts allotting one at a time and you end 
up with 1 on each zone and another one ends up in a random zone.

Once we are done with SOLR-12511, that ceases to be a problem. your rules will 
look like 
{code}
{"replica" : "33%", "shard" : "#EACH", "sysprop:region": "us-east-1a"}
{"replica" : "33%", "shard" : "#EACH", "sysprop:region": "us-east-1b"}
{"replica" : "33%", "shard" : "#EACH", "sysprop:region": "us-east-1c"}
{code}

this means the effective policy for a shard with 4 replicas is 
{code}
{"replica" : "1.33", "shard" : "#EACH", "sysprop:region": "us-east-1a"}
{"replica" : "1.33", "shard" : "#EACH", "sysprop:region": "us-east-1b"}
{"replica" : "1.33", "shard" : "#EACH", "sysprop:region": "us-east-1c"}
{code}

This means that any zone with 0 replicas is a violation. 
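
For reference, a minimal, hypothetical Java sketch (not Solr source code) of the 
percentage arithmetic spelled out in the issue description below 
({{replicas * percentage / 100}}); the class and method names are illustrative 
assumptions:

{code:java}
// Hypothetical illustration only: the "computed value" for a percentage-based
// "replica" attribute, following the formula in the issue description below:
// replicas_covered_by_rule * percentage / 100.
public class PercentageReplicaBound {

  // Fractional replica bound for the replicas covered by a rule.
  static double computedBound(int replicasCoveredByRule, double percentage) {
    return replicasCoveredByRule * percentage / 100.0;
  }

  public static void main(String[] args) {
    // Example 1 below: replicationFactor=3, per-shard rule "<34%" -> 3 * 34 / 100 = 1.02
    System.out.println(computedBound(3, 34));
    // Example 2 below: 2 shards, replicationFactor=3, collection-wide rule "<34%" -> 2.04
    System.out.println(computedBound(3 * 2, 34));
  }
}
{code}

(Under the same arithmetic, the {{1.33}} in the comment above presumably treats 
33% of a 4-replica shard as one third, i.e. 4/3.)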

> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}} . The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> Which means *_for each shard_* keep less than 1.02 replica in east 
> availability zone
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means _*for each collection*_ keep less than 2.04 replicas on east 
> availability zone




[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4692 - Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4692/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
Didn't see all replicas for shard shard1 in repfacttest_c8n_1x3 come up within 
3 ms! ClusterState: {   "control_collection":{ "pullReplicas":"0", 
"replicationFactor":"1", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", 
"replicas":{"core_node2":{ 
"core":"control_collection_shard1_replica_n1", 
"base_url":"http://127.0.0.1:59298";, 
"node_name":"127.0.0.1:59298_", "state":"active", 
"type":"NRT", "leader":"true", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0"},   
"repfacttest_c8n_1x3":{ "pullReplicas":"0", "replicationFactor":"3",
 "shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node4":{ 
"core":"repfacttest_c8n_1x3_shard1_replica_n1", 
"base_url":"http://127.0.0.1:59337";, 
"node_name":"127.0.0.1:59337_", "state":"recovering", 
"type":"NRT"},   "core_node5":{ 
"core":"repfacttest_c8n_1x3_shard1_replica_n2", 
"base_url":"http://127.0.0.1:59320";, 
"node_name":"127.0.0.1:59320_", "state":"active", 
"type":"NRT", "leader":"true"},   "core_node6":{
 "core":"repfacttest_c8n_1x3_shard1_replica_n3", 
"base_url":"http://127.0.0.1:59298";, 
"node_name":"127.0.0.1:59298_", "state":"recovering", 
"type":"NRT", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"3",   
  "tlogReplicas":"0"},   "collection1":{ "pullReplicas":"0", 
"replicationFactor":"1", "shards":{   "shard1":{ 
"range":"8000-d554", "state":"active", 
"replicas":{"core_node66":{ 
"core":"collection1_shard1_replica_n65", 
"base_url":"http://127.0.0.1:59337";, 
"node_name":"127.0.0.1:59337_", "state":"active", 
"type":"NRT", "leader":"true"}}},   "shard2":{ 
"range":"d555-2aa9", "state":"active", 
"replicas":{"core_node62":{ 
"core":"collection1_shard2_replica_n61", 
"base_url":"http://127.0.0.1:59320";, 
"node_name":"127.0.0.1:59320_", "state":"active", 
"type":"NRT", "leader":"true"}}},   "shard3":{ 
"range":"2aaa-7fff", "state":"active", 
"replicas":{"core_node64":{ 
"core":"collection1_shard3_replica_n63", 
"base_url":"http://127.0.0.1:59329";, 
"node_name":"127.0.0.1:59329_", "state":"active", 
"type":"NRT", "leader":"true", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in 
repfacttest_c8n_1x3 come up within 3 ms! ClusterState: {
  "control_collection":{
"pullReplicas":"0",
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node2":{
"core":"control_collection_shard1_replica_n1",
"base_url":"http://127.0.0.1:59298";,
"node_name":"127.0.0.1:59298_",
"state":"active",
"type":"NRT",
"leader":"true",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"nrtReplicas":"1",
"tlogReplicas":"0"},
  "repfacttest_c8n_1x3":{
"pullReplicas":"0",
"replicationFactor":"3",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{
  "core_node4":{
"core":"repfacttest_c8n_1x3_shard1_replica_n1",
"base_url":"http://127.0.0.1:59337";,
"node_name":"127.0.0.1:59337_",
"state":"recovering",
"type":"NRT"},
  "core_node5":{
"core":"repfacttest_c8n_1x3_shard1_replica_n2",
"base_url":"http://127.0.0.1:59320";,
"node_name":"127.0.0.1:59320_",
"state":"active",
"type":"NRT",
"leader":"true"},
  "core_node6":{
"core":"repfacttest_c8n_1x3_shard1_replica_n3",
"base_url":"http://127.0.0.1:59298";,
"node_name":"127.0.0.1:59298_",
"state":"recovering",
"type":"NRT",

Re: Welcome Nhat Nguyen as Lucene/Solr committer

2018-06-22 Thread Nhat Nguyen
Hello,

Thank you all for the warm welcomes :). I am happy and excited to join the
force as a committer.

I was born and raised in Saigon, Vietnam. Five years ago, I moved to
Montreal, Canada to study. Last year I had the great chance to join Elastic,
where I work on Elasticsearch, mainly focusing on the distributed area.

I look forward to working with you and meeting you in tickets and in person.
Again, thank you for the invitation!

Cheers,
Nhat

On Fri, Jun 22, 2018 at 8:28 PM Michael McCandless <
luc...@mikemccandless.com> wrote:

> Welcome Nhat!
>
> Mike
>
> On Mon, Jun 18, 2018, 4:42 PM Adrien Grand  wrote:
>
>> Hi all,
>>
>> Please join me in welcoming Nhat Nguyen as the latest Lucene/Solr
>> committer.
>> Nhat, it's tradition for you to introduce yourself with a brief bio.
>>
>> Congratulations and Welcome!
>>
>> Adrien
>>
>


[JENKINS] Lucene-Solr-repro - Build # 864 - Unstable

2018-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/864/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/14/consoleText

[repro] Revision: 3b0edb0d667dbfa8c8ffb6c836a68a6f07effc00

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=9B677342BB4945D1 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ko -Dtests.timezone=AET -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=9B677342BB4945D1 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ko -Dtests.timezone=AET -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=9B677342BB4945D1 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=bg-BG -Dtests.timezone=Asia/Hebron -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestStressCloudBlindAtomicUpdates 
-Dtests.method=test_stored_idx -Dtests.seed=9B677342BB4945D1 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-KW -Dtests.timezone=Etc/GMT+6 -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestCloudPivotFacet 
-Dtests.method=test -Dtests.seed=9B677342BB4945D1 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ms-MY -Dtests.timezone=Asia/Aden -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestCloudPivotFacet 
-Dtests.seed=9B677342BB4945D1 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ms-MY -Dtests.timezone=Asia/Aden -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
25e7631b9014a5d0729be7926313c498df1dc606
[repro] git fetch
[repro] git checkout 3b0edb0d667dbfa8c8ffb6c836a68a6f07effc00

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]   solr/core
[repro]   TestCloudPivotFacet
[repro]   IndexSizeTriggerTest
[repro]   TestTriggerIntegration
[repro]   TestStressCloudBlindAtomicUpdates
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.TestCloudPivotFacet|*.IndexSizeTriggerTest|*.TestTriggerIntegration|*.TestStressCloudBlindAtomicUpdates"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=9B677342BB4945D1 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ms-MY -Dtests.timezone=Asia/Aden -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 1112111 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.TestCloudPivotFacet
[repro]   3/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   5/5 failed: org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] an

[JENKINS] Lucene-Solr-NightlyTests-7.4 - Build # 11 - Failure

2018-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.4/11/

3 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.moveReplicaTest

Error Message:
Collection not found: movereplicatest_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: movereplicatest_coll
at 
__randomizedtesting.SeedInfo.seed([A14C3FE1E5B707C1:A66E877BB22DB683]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.moveReplicaTest(HdfsCollectionsAPIDistributedZkTest.java:96)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22300 - Still Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22300/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 test failed.
FAILED:  
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple2 null Live Nodes: [127.0.0.1:34223_solr, 
127.0.0.1:45039_solr] Last available state: 
DocCollection(testSimple2//collections/testSimple2/state.json/21)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node3/data/",
   "base_url":"http://127.0.0.1:34223/solr";,   
"node_name":"127.0.0.1:34223_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node5":{   
"dataDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node5/data/",
   "base_url":"http://127.0.0.1:39879/solr";,   
"node_name":"127.0.0.1:39879_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"down"}}}, "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{ 
"core_node6":{   
"dataDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node6/data/",
   "base_url":"http://127.0.0.1:34223/solr";,   
"node_name":"127.0.0.1:34223_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node6/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node8":{   
"dataDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node8/data/",
   "base_url":"http://127.0.0.1:39879/solr";,   
"node_name":"127.0.0.1:39879_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node8/data/tlog",
   "core":"testSimple2_shard2_replica_n7",   
"shared_storage":"true",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"true",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Waiting for collection testSimple2
null
Live Nodes: [127.0.0.1:34223_solr, 127.0.0.1:45039_solr]
Last available state: 
DocCollection(testSimple2//collections/testSimple2/state.json/21)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{
"core_node3":{
  
"dataDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node3/data/",
  "base_url":"http://127.0.0.1:34223/solr";,
  "node_name":"127.0.0.1:34223_solr",
  "type":"NRT",
  "force_set_state":"false",
  
"ulogDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node3/data/tlog",
  "core":"testSimple2_shard1_replica_n1",
  "shared_storage":"true",
  "state":"active",
  "leader":"true"},
"core_node5":{
  
"dataDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node5/data/",
  "base_url":"http://127.0.0.1:39879/solr";,
  "node_name":"127.0.0.1:39879_solr",
  "type":"NRT",
  "force_set_state":"false",
  
"ulogDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node5/data/tlog",
  "core":"testSimple2_shard1_replica_n2",
  "shared_storage":"true",
  "state":"down"}}},
"shard2":{
  "range":"0-7fff",
  "state":"active",
  "replicas":{
"core_node6":{
  
"dataDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node6/data/",
  "base_url":"http://127.0.0.1:34223/solr";,
  "node_name":"127.0.0.1:34223_solr",
  "type":"NRT",
  "force_set_state":"false",
  
"ulogDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node6/data/tlog",
  "core":"testSimple2_shard2_replica_n4",
  "shared_storage":"true",
  "state":"active",
  "leader":"true"},
"core_node8":{
  
"dataDir":"hdfs://localhost.localdomain:46029/data/testSimple2/core_node8/data/",
  "base_url":"http://127.0.0.1:39879/

[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-22 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520919#comment-16520919
 ] 

Noble Paul commented on SOLR-11985:
---

bq. for the collection with 4 replicas. In the collection with 4 replicas, you 
could have 2 replicas on us-east-1a and 2 replicas on us-east-1b. What we 
really want is 1 on each before having the 4th replica on another zone...

In reality, that is what happens: it starts allotting one replica at a time, so you end 
up with 1 on each zone and the extra one ends up in a random zone.

Once we are done with SOLR-12511, that ceases to be a problem. Your rules will 
look like:
{code}
{"replica" : "33%", "shard" : "#EACH", "sysprop:region": "us-east-1a"}
{"replica" : "33%", "shard" : "#EACH", "sysprop:region": "us-east-1b"}
{"replica" : "33%", "shard" : "#EACH", "sysprop:region": "us-east-1c"}
{code}

This means the effective policy for a shard with 4 replicas is:
{code}
{"replica" : "1.33", "shard" : "#EACH", "sysprop:region": "us-east-1a"}
{"replica" : "1.33", "shard" : "#EACH", "sysprop:region": "us-east-1b"}
{"replica" : "1.33", "shard" : "#EACH", "sysprop:region": "us-east-1c"}
{code}

This means that any zone with 0 replicas is a violation. 
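
A minimal sketch of that interpretation (plain Java, not Solr's actual policy engine; it assumes 
the computed value is replicationFactor * pct / 100, as in the issue description below, and that 
the acceptable per-zone replica count is the floor..ceil range around it, per the related 
SOLR-12495/SOLR-12511 discussion):

{code:java}
// Hypothetical illustration only; not Solr's policy code.
// Assumes: computed = replicationFactor * pct / 100, and the acceptable
// integer replica count per zone is floor(computed)..ceil(computed).
public class PercentRuleSketch {

  static boolean violates(int replicasInZone, int replicationFactor, double pct) {
    double computed = replicationFactor * pct / 100.0; // e.g. 4 * 33 / 100 = 1.32
    int min = (int) Math.floor(computed);              // 1
    int max = (int) Math.ceil(computed);               // 2
    return replicasInZone < min || replicasInZone > max;
  }

  public static void main(String[] args) {
    System.out.println(violates(0, 4, 33)); // true  -> a zone holding no replicas is a violation
    System.out.println(violates(2, 4, 33)); // false -> a 2/1/1 spread across three zones is fine
  }
}
{code}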

> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}} . The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> Which means *_for each shard_* keep less than 1.02 replica in east 
> availability zone
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means _*for each collection*_ keep less than 2.04 replicas on east 
> availability zone



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Nhat Nguyen as Lucene/Solr committer

2018-06-22 Thread Michael McCandless
Welcome Nhat!

Mike

On Mon, Jun 18, 2018, 4:42 PM Adrien Grand  wrote:

> Hi all,
>
> Please join me in welcoming Nhat Nguyen as the latest Lucene/Solr
> committer.
> Nhat, it's tradition for you to introduce yourself with a brief bio.
>
> Congratulations and Welcome!
>
> Adrien
>


[JENKINS] Lucene-Solr-7.4-Windows (32bit/jdk1.8.0_172) - Build # 11 - Still Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.4-Windows/11/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=3957

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=3957
at 
__randomizedtesting.SeedInfo.seed([952D9F10469680B:313EAAD490B9CA4D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:52)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
NanoTimeSource time diff=211300

Stack Trace:
java.lang.AssertionError: NanoTimeSource time diff=211300
at 
__randomizedte

[jira] [Commented] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-06-22 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520908#comment-16520908
 ] 

Erick Erickson commented on SOLR-12008:
---

Gah! Figured it out.

The code before the changes uses the configuration in cloud-scripts, which does not 
define any file appenders, so they're not created. 

Pointing the call from run_tool through a common file that _does_ define a file 
appender creates them in server/logs, since it doesn't have any other 
information to work with.

I think the cleanest thing is to provide a second log4j2 config file in 
server/resources with a name like log4j2-console.xml. Seems to work anyway.


> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example, users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-06-22 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520824#comment-16520824
 ] 

Erick Erickson commented on SOLR-12008:
---

I'm trying to untangle the bits about running examples and have run into this 
problem:

Whenever I run an example, I get log files in solr/server/logs. They're small 
but annoying. Once the code gets running it's fine; the ongoing logs go in

example/techproducts/logs

or

example/node1/logs
example/node2/logs
etc.

The first invocation of SolrCLI creates the bogus logs; in the example cases it 
comes back to bin/solr with the -s option, which puts the logs in the right 
place.

I can't find a good way to just prevent log4j2 from doing _anything_. Absent 
that, is there a real problem with having the extra log files around in the 
example cases?

Varun:

This may be related to your issue with eclipse, we'll see.

I'm also changing 
func launch_solr 
to 
func start_solr

to make it symmetrical.

> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example, users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [DISCUSS] Geo/spatial organization in Lucene

2018-06-22 Thread Karl Wright
Sorry, I just returned from an overseas trip.  I'll try to put some thought
into a cogent response when I get a little less scrambled.

Karl


On Fri, Jun 22, 2018 at 4:16 PM David Smiley 
wrote:

> Nick, are you not only arguing for spatial code to be in Lucene core, but
> also for the "spatial" module to continue to exist?  And I believe Adrien
> still wants some spatial stuff in sandbox so that means spatial code in 5
> modules.  Five modules... let that sink in... wow.  Gosh that's kinda
> overwhelming IMO.
>
> Karl do you have any opinions about this stuff?  I don't know what your
> opinions are, come to think of it.
>
> ~ David
>
> On Wed, Jun 20, 2018 at 1:01 PM Nicholas Knize  wrote:
>
>> If I were to pick between the two, I also have a preference for B.  I've
>> also tried to keep this whole spatial organization rather simple:
>>
>> core - simple spatial capabilities needed by the 99% spatial use case
>> (e.g., web mapping). Includes LatLonPoint, polygon & distance search
>> (everything currently in sandbox). Lightweight, and no dependencies or
>> complexities. If one wants simple and fast point search, all you need is
>> the core module.
>>
>> spatial - dependency free. Expands on core spatial to include simple
>> shape searching. Uses internal relations. Everything confined to core and
>> spatial modules.
>>
>> spatial-extras - expanded spatial capabilities. Welcomes third-party
>> dependencies (e.g., S3, SIS, Proj4J). Targets more advanced/expert GIS
>> use-cases.
>>
>> geo3d - trades speed for accuracy. I've always struggled with the name,
>> since it implies 3D shapes/point cloud support. But history has shown
>> considering a name change to be a bike-shedding endeavor.
>>
>> At the end of the day I'm up for whatever makes most sense for everyone
>> here. Lord knows we could use more people helping out on geo.
>>
>> - Nick
>>
>>
>>
>> On Wed, Jun 20, 2018 at 11:40 AM Adrien Grand  wrote:
>>
>>> I have a slight preference for B similarly to how StandardAnalyzer is in
>>> core and other analyzers are in analysis, but no strong feelings. In any
>>> case I agree that both A and B would be much better than the current
>>> situation.
>>>
>>>
>>> Le mer. 20 juin 2018 à 18:09, David Smiley  a
>>> écrit :
>>>
 I think everyone agrees the current state of spatial code organization
 in Lucene is not desirable.  We have a spatial module that has almost
 nothing in it, we have mature spatial code in the sandbox that needs to
 "graduate" somewhere, and we've got a handful of geo utilities in Lucene
 core (mostly because I didn't notice).  No agreement has been reached on
 what the desired state should be.

 I'd like to hear opinions on this from members of the community.  I am
 especially interested in listening to people that normally don't seem to
 speak up about spatial matters. Perhaps Uwe Schindler and Alan Woodward – I
 respect both of you guys a ton for your tenure with Lucene and you aren't too
 pushy with your opinions. I can be convinced to change my mind, especially
 if coming from you two.  Of course anyone can respond -- this is an open
 discussion!

 As I understand it, there are two proposals loosely defined as follows:

 (A) Common spatial needs will be met in the "spatial" module.  The
 Lucene "spatial" module, currently in a weird gutted state, should have
 basically all spatial code currently in sandbox plus all geo stuff in
 Lucene core. Thus there will be no geo stuff in Lucene core.

 (B) Common spatial needs will be met by Lucene core.  Lucene core
 should expand its current "geo" utilities to include the spatial stuff
 currently in the sandbox module.  It'd also take on what little remains in
 the Lucene spatial module and thus we can remove the spatial module.

 With either plan if a user has certain advanced/specialized needs they
 may need to go to spatial3d or spatial-extras modules.  These would be
 untouched in both proposals.

 I'm in favor of (A) on the grounds that we have modules for special
 feature areas, and spatial should be no different.  My gut estimation is
 that 75-90% of apps do not have spatial requirements and need not depend on
 any spatial module.  Other modules are probably used more (e.g. queries,
 suggest, etc.)

 Respectfully,
   ~ David

 p.s. if I mischaracterized any proposal or overlooked another then I'm
 sorry, please correct me.
 --
 Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
 LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
 http://www.solrenterprisesearchserver.com

>>> --
>> Nicholas Knize  |  Geospatial Software Guy  |  Elasticsearch & Apache
>> Lucene  |  nkn...@apache.org
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterp

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_172) - Build # 648 - Still Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/648/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
NanoTimeSource time diff=8327700

Stack Trace:
java.lang.AssertionError: NanoTimeSource time diff=8327700
at 
__randomizedtesting.SeedInfo.seed([D91AB440D940862C:E176C7654D90246A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:52)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16133 lines...]
   [junit4] Suite: org.apache.solr.common.util.TestTimeSource
   [junit4]   2> 171703 INFO  
(SUITE-TestTimeSource-seed#[D91AB440D940862C]-worker) [] 
o.

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22299 - Still Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22299/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=1434, 
name=cdcr-replicator-641-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=1434, name=cdcr-replicator-641-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([179CC4BE3CA5082E]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.schema.TestBulkSchemaConcurrent.test

Error Message:
Captured an uncaught exception in thread: Thread[id=34281, name=Thread-7131, 
state=RUNNABLE, group=TGRP-TestBulkSchemaConcurrent]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=34281, name=Thread-7131, state=RUNNABLE, 
group=TGRP-TestBulkSchemaConcurrent]
at 
__randomizedtesting.SeedInfo.seed([179CC4BE3CA5082E:9FC8FB64925965D6]:0)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at __randomizedtesting.SeedInfo.seed([179CC4BE3CA5082E]:0)
at java.base/java.util.ArrayList.add(ArrayList.java:468)
at java.base/java.util.ArrayList.add(ArrayList.java:480)
at 
org.apache.solr.schema.TestBulkSchemaConcurrent$1.run(TestBulkSchemaConcurrent.java:71)




Build Log:
[...truncated 1869 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20180622_193122_97517226885698389146925.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20180622_193122_9757075924846183082882.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20180622_193122_975442950508919956658.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 296 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20180622_193952_319368648694488547.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20180622_193952_3192405921133633563316.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 5 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20180622_193952_33712965927917989661025.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...tr

Re: [DISCUSS] Geo/spatial organization in Lucene

2018-06-22 Thread David Smiley
Nick, are you not only arguing for spatial code to be in Lucene core, but
also for the "spatial" module to continue to exist?  And I believe Adrien
still wants some spatial stuff in sandbox so that means spatial code in 5
modules.  Five modules... let that sink in... wow.  Gosh that's kinda
overwhelming IMO.

Karl do you have any opinions about this stuff?  I don't know what your
opinions are, come to think of it.

~ David

On Wed, Jun 20, 2018 at 1:01 PM Nicholas Knize  wrote:

> If I were to pick between the two, I also have a preference for B.  I've
> also tried to keep this whole spatial organization rather simple:
>
> core - simple spatial capabilities needed by the 99% spatial use case
> (e.g., web mapping). Includes LatLonPoint, polygon & distance search
> (everything currently in sandbox). Lightweight, and no dependencies or
> complexities. If one wants simple and fast point search, all you need is
> the core module.
>
> spatial - dependency free. Expands on core spatial to include simple shape
> searching. Uses internal relations. Everything confined to core and spatial
> modules.
>
> spatial-extras - expanded spatial capabilities. Welcomes third-party
> dependencies (e.g., S3, SIS, Proj4J). Targets more advanced/expert GIS
> use-cases.
>
> geo3d - trades speed for accuracy. I've always struggled with the name,
> since it implies 3D shapes/point cloud support. But history has shown
> considering a name change to be a bike-shedding endeavor.
>
> At the end of the day I'm up for whatever makes most sense for everyone
> here. Lord knows we could use more people helping out on geo.
>
> - Nick
>
>
>
> On Wed, Jun 20, 2018 at 11:40 AM Adrien Grand  wrote:
>
>> I have a slight preference for B similarly to how StandardAnalyzer is in
>> core and other analyzers are in analysis, but no strong feelings. In any
>> case I agree that both A and B would be much better than the current
>> situation.
>>
>>
>> Le mer. 20 juin 2018 à 18:09, David Smiley  a
>> écrit :
>>
>>> I think everyone agrees the current state of spatial code organization
>>> in Lucene is not desirable.  We have a spatial module that has almost
>>> nothing in it, we have mature spatial code in the sandbox that needs to
>>> "graduate" somewhere, and we've got a handful of geo utilities in Lucene
>>> core (mostly because I didn't notice).  No agreement has been reached on
>>> what the desired state should be.
>>>
>>> I'd like to hear opinions on this from members of the community.  I am
>>> especially interested in listening to people that normally don't seem to
>>> speak up about spatial matters. Perhaps Uwe Schindler and Alan Woodward – I
>>> respect both of you guys a ton for your tenure with Lucene and you aren't too
>>> pushy with your opinions. I can be convinced to change my mind, especially
>>> if coming from you two.  Of course anyone can respond -- this is an open
>>> discussion!
>>>
>>> As I understand it, there are two proposals loosely defined as follows:
>>>
>>> (A) Common spatial needs will be met in the "spatial" module.  The
>>> Lucene "spatial" module, currently in a weird gutted state, should have
>>> basically all spatial code currently in sandbox plus all geo stuff in
>>> Lucene core. Thus there will be no geo stuff in Lucene core.
>>>
>>> (B) Common spatial needs will be met by Lucene core.  Lucene core should
>>> expand its current "geo" utilities to include the spatial stuff currently
>>> in the sandbox module.  It'd also take on what little remains in the Lucene
>>> spatial module and thus we can remove the spatial module.
>>>
>>> With either plan if a user has certain advanced/specialized needs they
>>> may need to go to spatial3d or spatial-extras modules.  These would be
>>> untouched in both proposals.
>>>
>>> I'm in favor of (A) on the grounds that we have modules for special
>>> feature areas, and spatial should be no different.  My gut estimation is
>>> that 75-90% of apps do not have spatial requirements and need not depend on
>>> any spatial module.  Other modules are probably used more (e.g. queries,
>>> suggest, etc.)
>>>
>>> Respectfully,
>>>   ~ David
>>>
>>> p.s. if I mischaracterized any proposal or overlooked another then I'm
>>> sorry, please correct me.
>>> --
>>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>>> http://www.solrenterprisesearchserver.com
>>>
>> --
> Nicholas Knize  |  Geospatial Software Guy  |  Elasticsearch & Apache
> Lucene  |  nkn...@apache.org
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10) - Build # 7374 - Still Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7374/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  org.apache.solr.core.TestCodecSupport.testMixedCompressionMode

Error Message:
Expecting compression mode string to be BEST_SPEED but got: BEST_COMPRESSION  
SegmentInfo: _1(8.0.0):C1  SegmentInfos: segments_6: _3(8.0.0):c2 _1(8.0.0):C1  
Codec: Lucene70 expected:<BEST_SPEED> but was:<BEST_COMPRESSION>

Stack Trace:
org.junit.ComparisonFailure: Expecting compression mode string to be BEST_SPEED 
but got: BEST_COMPRESSION
 SegmentInfo: _1(8.0.0):C1
 SegmentInfos: segments_6: _3(8.0.0):c2 _1(8.0.0):C1
 Codec: Lucene70 expected:<BEST_SPEED> but was:<BEST_COMPRESSION>
at 
__randomizedtesting.SeedInfo.seed([B88E7E11EA1B7FD:D5FD90855BC3424C]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.solr.core.TestCodecSupport.lambda$assertCompressionMode$0(TestCodecSupport.java:115)
at org.apache.solr.core.SolrCore.withSearcher(SolrCore.java:1874)
at 
org.apache.solr.core.TestCodecSupport.assertCompressionMode(TestCodecSupport.java:112)
at 
org.apache.solr.core.TestCodecSupport.testMixedCompressionMode(TestCodecSupport.java:157)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.random

[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-22 Thread Jerry Bao (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520669#comment-16520669
 ] 

Jerry Bao commented on SOLR-11985:
--

Given the way it was written, the concern I had was the following:

One collection has shards with 3 replicas and another collection has shards 
with 4 replicas. If I had the following set of rules...
{code}
{"replica" : "<33%", "shard" : "#EACH", "sysprop:region": "us-east-1a"}
{"replica" : "<33%", "shard" : "#EACH", "sysprop:region": "us-east-1b"}
{"replica" : "<33%", "shard" : "#EACH", "sysprop:region": "us-east-1c"}
{code}

My concern was it would turn into
{code}
{"replica" : "<2", "shard" : "#EACH", "sysprop:region": "us-east-1a"}
{"replica" : "<2", "shard" : "#EACH", "sysprop:region": "us-east-1b"}
{"replica" : "<2", "shard" : "#EACH", "sysprop:region": "us-east-1c"}
{code}
for the collection with 3 replicas, and
{code}
{"replica" : "<3", "shard" : "#EACH", "sysprop:region": "us-east-1a"}
{"replica" : "<3", "shard" : "#EACH", "sysprop:region": "us-east-1b"}
{"replica" : "<3", "shard" : "#EACH", "sysprop:region": "us-east-1c"}
{code}
for the collection with 4 replicas. In the collection with 4 replicas, you 
could have 2 replicas on us-east-1a and 2 replicas on us-east-1b. What we 
really want is 1 on each before having the 4th replica on another zone. Due to 
the way the rules are set up, it treats them individually when they should be 
treated together, evenly balancing the replicas based on the number of zones 
available.

We could make it work by making different zone rules per collection, but that 
shouldn't be necessary. Rack awareness (which is what we're trying to achieve 
here) should be collection agnostic and apply to each collection. 
https://issues.apache.org/jira/browse/SOLR-12511 would help here.
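
A minimal sketch of that concern (plain Java, hypothetical, not Solr's policy engine), showing 
that an upper-bound-only rule such as {{"replica" : "<3"}} per zone never flags the uneven 
2/2/0 placement described above:

{code:java}
// Hypothetical illustration of the concern above; not Solr code.
// With only an upper bound ("< 3" replicas per zone), a 2/2/0 spread of a
// 4-replica shard across three zones produces no violation.
public class UpperBoundOnlySketch {
  public static void main(String[] args) {
    int[] replicasPerZone = {2, 2, 0};  // us-east-1a, us-east-1b, us-east-1c
    int upperBound = 3;                 // the "<3" rule described above for 4 replicas
    boolean violation = false;
    for (int count : replicasPerZone) {
      if (count >= upperBound) {        // only the upper bound is ever checked
        violation = true;
      }
    }
    System.out.println("violation = " + violation); // false, even though 2/1/1 is what we want
  }
}
{code}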

> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}} . The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> Which means *_for each shard_* keep less than 1.02 replica in east 
> availability zone
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means _*for each collection*_ keep less than 2.04 replicas on east 
> availability zone



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12495) Enhance the Autoscaling policy syntax to evenly distribute replicas

2018-06-22 Thread Jerry Bao (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520654#comment-16520654
 ] 

Jerry Bao commented on SOLR-12495:
--

{quote}
Actually, the terms replica and shard are always associated with a collection. If 
the attribute shard is present, the replica counts are computed on a per-shard 
basis; if it is absent, it is computed on a per-collection basis.

The equivalent term for a replica globally is a core, which is not associated 
with a collection or shard.
{quote}
I see; could {"core": "#MINIMUM", "node": "#ANY"} be included with this issue? 
Along with per-collection balancing, we'll also need cluster-wide balancing.

{quote}
That means the number of replicas will have to be between 1 and 2 (inclusive). 
Which means both 1 and 2 are valid, but 0, 3, or >3 are invalid, and the list 
of violations will show that.
{quote}
Awesome! No qualms here then :)

Thanks for all your help on this issue! Cluster balancing is a critical issue 
for us @ Reddit.
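
A minimal sketch (plain Java, not Solr's policy engine) of the {{#MINIMUM}} computation quoted 
in the issue description below, i.e. the bound {{<= Math.ceil(number_of_replicas / number_of_valid_nodes)}}:

{code:java}
// Illustration of the #MINIMUM bound from the issue description; not Solr code.
public class MinimumReplicaSketch {

  // Upper bound on replicas per node: <= ceil(replicas / nodes)
  static int minimumBound(int numberOfReplicas, int numberOfValidNodes) {
    return (int) Math.ceil((double) numberOfReplicas / numberOfValidNodes);
  }

  public static void main(String[] args) {
    // Case from the description: nodes=3, replicationFactor=4 -> ceil(4/3) = 2,
    // i.e. at most 2 replicas of the shard may land on any one node.
    System.out.println(minimumBound(4, 3)); // 2
  }
}
{code}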

> Enhance the Autoscaling policy syntax to evenly distribute replicas
> ---
>
> Key: SOLR-12495
> URL: https://issues.apache.org/jira/browse/SOLR-12495
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> Support a new function value for {{replica= "#MINIMUM"}}
> {{#MINIMUM}} means the minimum computed value for the given configuration
> the value of replica will be calculated as  {{<= 
> Math.ceil(number_of_replicas/number_of_valid_nodes) }}
> *example 1:*
> {code:java}
> {"replica" : "#MINIMUM" , "shard" : "#EACH" , "node" : "#ANY"}
> {code}
> *case 1* : nodes=3, replicationFactor=4
>  the value of replica will be calculated as {{Math.ceil(4/3) = 2}}
> current state : nodes=3, replicationFactor=2
> this is equivalent to the hard coded rule
> {code:java}
> {"replica" : "<3" , "shard" : "#EACH" , "node" : "#ANY"}
> {code}
> *case 2* : 
> current state : nodes=3, replicationFactor=2
> this is equivalent to the hard coded rule
> {code:java}
> {"replica" : "<3" , "shard" : "#EACH" , "node" : "#ANY"}
> {code}
> *example:2*
> {code}
> {"replica" : "#MINIMUM"  , "node" : "#ANY"}{code}
> case 1: numShards = 2, replicationFactor=3, nodes = 5
> this is equivalent to the hard coded rule
> {code:java}
> {"replica" : "<3" , "node" : "#ANY"}
> {code}
> *example:3*
> {code}
> {"replica" : "<2"  , "shard" : "#EACH" , "port" : "8983"}{code}
> case 1: {{replicationFactor=3, nodes with port 8983 = 2}}
> this is equivalent to the hard coded rule
> {code}
> {"replica" : "<3"  , "shard" : "#EACH" , "port" : "8983"}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 681 - Still Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/681/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest.testNodeMarkersRegistration

Error Message:
Path /autoscaling/nodeLost/127.0.0.1:58805_solr exists

Stack Trace:
java.lang.AssertionError: Path /autoscaling/nodeLost/127.0.0.1:58805_solr exists
at 
__randomizedtesting.SeedInfo.seed([BACD74715C58ED1F:A277FC7D526D20F0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest.testNodeMarkersRegistration(NodeMarkersRegistrationTest.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12934 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest
 

Re: VOTE: Release Apache Solr Ref Guide for 7.4

2018-06-22 Thread kshitij tyagi
+1

On Fri, Jun 22, 2018 at 8:10 PM, Cassandra Targett 
wrote:

> Please vote to release the Solr Ref Guide for 7.4.
>
> The PDF artifacts can be downloaded from: https://dist.apache.org/
> repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-7.4-RC1/
>
> $ cat apache-solr-ref-guide-7.4.pdf.sha1
> 1c09d23c6e4c470a6298dbba20684b81da683a8b  apache-solr-ref-guide-7.4.pdf
>
> The PDF is up to 1258 pages.
>
> The online version is available at: http://lucene.apache.org/
> solr/guide/7_4/
>
> Here's my +1.
>
> Cassandra
>


[jira] [Commented] (LUCENE-8367) Make per-dimension drill down optional for each facet dimension

2018-06-22 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520557#comment-16520557
 ] 

Lucene/Solr QA commented on LUCENE-8367:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 29m 43s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 29s{color} | {color:green} facet in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 52s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8367 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12928696/LUCENE-8367.patch |
| Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 3b9d3a7 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| Test Results | https://builds.apache.org/job/PreCommit-LUCENE-Build/38/testReport/ |
| modules | C: lucene/core lucene/facet U: lucene |
| Console output | https://builds.apache.org/job/PreCommit-LUCENE-Build/38/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |


This message was automatically generated.



> Make per-dimension drill down optional for each facet dimension
> ---
>
> Key: LUCENE-8367
> URL: https://issues.apache.org/jira/browse/LUCENE-8367
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/facet
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Major
> Attachments: LUCENE-8367.patch
>
>
> Today, when you index a {{FacetField}} with path {{foo/bar}}, we index two 
> drill-down terms onto the document: {{foo}} and {{foo/bar}}.
> But I suspect some users (like me!) don't need to drill down just on {{foo}} 
> (effectively "find all documents that have any value for this facet 
> dimension"), so I added an option to {{FacetsConfig}} to let you specify 
> per-dimension whether you need to drill down (defaults to true, matching 
> current behavior).
> I also added {{hashCode}} and {{equals}} to the {{LongRange}} and 
> {{DoubleRange}} classes in facets module, and improved {{CheckIndex}} a bit 
> to print the total %deletions across the index.
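
An illustrative sketch of the indexing behavior described above (an editorial addition, not part of LUCENE-8367.patch). The per-dimension toggle appears only as a commented-out, hypothetical setter; the real method name is whatever the attached patch defines.

    import java.nio.file.Paths;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.facet.FacetField;
    import org.apache.lucene.facet.FacetsConfig;
    import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class DrillDownTermsSketch {
      public static void main(String[] args) throws Exception {
        FacetsConfig config = new FacetsConfig();
        // Hypothetical per-dimension toggle from the issue description; the real
        // setter name is whatever LUCENE-8367.patch adds (defaults to true today):
        // config.setDrillDownEnabled("foo", false);

        try (Directory indexDir = FSDirectory.open(Paths.get("/tmp/facet-index"));
             Directory taxoDir = FSDirectory.open(Paths.get("/tmp/facet-taxo"));
             IndexWriter writer = new IndexWriter(indexDir, new IndexWriterConfig());
             DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(taxoDir)) {
          Document doc = new Document();
          doc.add(new FacetField("foo", "bar"));
          // FacetsConfig.build() is what adds the drill-down terms: today both the
          // dimension-only term "foo" and the full path "foo/bar" get indexed.
          writer.addDocument(config.build(taxoWriter, doc));
        }
      }
    }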



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1926 - Still unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1926/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=484, 
name=cdcr-replicator-134-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=484, name=cdcr-replicator-134-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([B393A2B45D95D9F1]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=34451, 
name=cdcr-replicator-14044-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=34451, name=cdcr-replicator-14044-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([B393A2B45D95D9F1]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:105)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14529 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> 3045811 INFO  
(SUITE-CdcrBidirectionalTest-seed#[B393A2B45D95D9F1]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_B393A2B45D95D9F1-001/init-core-data-001
   [junit4]   2> 3045811 INFO  
(SUITE-CdcrBidirectionalTest-seed#[B393A2B45D95D9F1]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 3045812 INFO  
(SUITE-CdcrBidirectionalTest-seed#[B393A2B45D95D9F1]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 3045815 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[B393A2B45D95D9F1]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 3045815 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[B393A2B45D95D9F1]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_B393A2B45D95D9F1-001/cdcr-cluster2-001
   [junit4]   2> 3045815 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[B393A2B45D95D9F1]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 3045815 INFO  (Thread-7969) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 3045816 INFO  (Thread-7969) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 3045818 ERROR (Thread-7969) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 3045916 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[B393A2B45D95D9F1]) [] 
o.a.s.c.ZkTestServer start zk server on port:40121
   [junit4]   2> 3045918 INFO  (zkConnectionManagerCallback-9097-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3045926 INFO  (jetty-launcher-9094-thread-1) [] 
o.e.j.s.Server jetty-9.4.10.v20180503; built: 2018-05-03T15:56:21.710Z; git: 
daa

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22298 - Unstable!

2018-06-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22298/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test

Error Message:
Timed out waiting for replica core_node52 (1529681195850) to replicate from 
leader core_node44 (0)

Stack Trace:
java.lang.AssertionError: Timed out waiting for replica core_node52 
(1529681195850) to replicate from leader core_node44 (0)
at 
__randomizedtesting.SeedInfo.seed([809835CCA1EF3CC3:8CC0A160F13513B]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForReplicationFromReplicas(AbstractFullDistribZkTestBase.java:2146)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:211)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carr

[JENKINS] Lucene-Solr-Tests-master - Build # 2571 - Unstable

2018-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2571/

4 tests failed.
FAILED:  
org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted

Error Message:
Collection not found: movereplicatest_coll3

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: 
movereplicatest_coll3
at 
__randomizedtesting.SeedInfo.seed([7F1893A97433FCA:486F6DE7964970A8]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.addDocs(MoveReplicaHDFSFailoverTest.java:203)
at 
org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted(MoveReplicaHDFSFailoverTest.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:

[jira] [Commented] (LUCENE-7314) Graduate InetAddressPoint and LatLonPoint to core

2018-06-22 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520482#comment-16520482
 ] 

Adrien Grand commented on LUCENE-7314:
--

+1 in general, but maybe we should keep LatLonPoint#nearest in sandbox (in its 
own class for instance) given that it relies on implementation details of the 
current codec.

> Graduate InetAddressPoint and LatLonPoint to core
> -
>
> Key: LUCENE-7314
> URL: https://issues.apache.org/jira/browse/LUCENE-7314
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7314.patch
>
>
> Maybe we should graduate these fields (and related queries) to core for 
> Lucene 6.1?
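
As an editorial illustration of the fields and related queries under discussion, a minimal sketch assuming the pre-graduation packaging (both classes live in the org.apache.lucene.document package of the sandbox module). The LatLonPoint#nearest API that Adrien suggests keeping in sandbox is deliberately left out.

    import java.net.InetAddress;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.InetAddressPoint;
    import org.apache.lucene.document.LatLonPoint;
    import org.apache.lucene.search.Query;

    public class PointFieldsSketch {
      public static Document buildDoc() throws Exception {
        Document doc = new Document();
        // Geo point with fast box/distance/polygon query support.
        doc.add(new LatLonPoint("location", 48.8566, 2.3522));
        // IP address point with exact/prefix/range query support.
        doc.add(new InetAddressPoint("client_ip", InetAddress.getByName("192.168.1.42")));
        return doc;
      }

      public static Query within10km() {
        // Distance query: radius is in meters around the given point.
        return LatLonPoint.newDistanceQuery("location", 48.8566, 2.3522, 10_000);
      }
    }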



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



VOTE: Release Apache Solr Ref Guide for 7.4

2018-06-22 Thread Cassandra Targett
Please vote to release the Solr Ref Guide for 7.4.

The PDF artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-7.4-RC1/

$ cat apache-solr-ref-guide-7.4.pdf.sha1
1c09d23c6e4c470a6298dbba20684b81da683a8b  apache-solr-ref-guide-7.4.pdf

The PDF is up to 1258 pages.

The online version is available at: http://lucene.apache.org/solr/guide/7_4/

Here's my +1.

Cassandra


Re: Lucene/Solr 7.4

2018-06-22 Thread Adrien Grand
Everything is ready but I need to wait for the mirrors to replicate so my
current plan is to announce on Monday.

On Fri, Jun 22, 2018 at 16:35, Cassandra Targett wrote:

> Thanks Adrien.
>
> You can go ahead and announce the release if you're ready - it will be a
> few days before the vote passes and we don't have to wait IMO.
>
> I think we're ready to merge the release processes but IIRC there are a
> few "plumbing" things we still need to do. I forget exactly what's left,
> but I'll find out and maybe we can try to do it for 7.5.
>
> On Fri, Jun 22, 2018 at 9:14 AM Adrien Grand  wrote:
>
>> Hi Cassandra,
>>
>> No worries. Would you like me to delay the announce a bit so that the ref
>> guide can be announced at the same time? Should we move the ref guide as
>> part of the release process or is there a good reason to keep it separate?
>>
>> Le ven. 22 juin 2018 à 15:48, Cassandra Targett 
>> a écrit :
>>
>>> I'm sorry I wasn't able to initiate the vote for the Ref Guide earlier
>>> this week as I'd intended - I don't like to do the artifact upload if I
>>> think I'll be interrupted before finishing all the steps, and traveling
>>> this week introduced more interruptions than I'd anticipated in the first
>>> part of the week.
>>>
>>> I'm uploading artifacts this morning, so will start the vote today.
>>>
>>> On Fri, Jun 22, 2018 at 7:54 AM Adrien Grand  wrote:
>>>
 Note that one important change compared to previous releases is that we
 are now pointing users to https://lucene.apache.org/core/downloads and
 http://lucene.apache.org/solr/downloads for downloads, which have been
 updated in order to pass the ASF requirements for download pages[1]. Please
 let me know if you notice anything wrong with these pages.

 [1] https://www.apache.org/dev/release-download-pages

 Le ven. 22 juin 2018 à 11:31, Adrien Grand  a
 écrit :

> I wrote some release notes for Lucene[1] and just a skeleton for
> Solr[2]. Can someone help me with the Solr release notes? Also feel free 
> to
> add items to the Lucene release notes if you think that they are
> release-notes-worthy. Thanks!
>
> [1] https://wiki.apache.org/lucene-java/ReleaseNote74
> [2] https://wiki.apache.org/solr/ReleaseNote74
>
> Le lun. 18 juin 2018 à 22:42, Cassandra Targett 
> a écrit :
>
>> Re the Ref Guide changes: it's true the source artifacts will miss
>> these. That's been true for nearly all of the releases since we moved to
>> this model, however.
>>
>> It seems a hard choice to skip documenting parameters when there is
>> still actually time to get them into the published artifacts of the
>> documentation. I of course understand the need for the source artifacts 
>> to
>> be correct, my point is only that we haven't been strict about it for the
>> past year.
>>
>> I'm flying today, so won't be able to build the Ref Guide RC until
>> late tonight/tomorrow morning, so there's time Anshum if you want to
>> backport to branch_7_4 & others are also OK with it despite the 7.4 RC
>> being available.
>>
>> However, your commit has a typo:
>>
>> "...the intention is to restric the size..."
>>
>> Should be "restrict" instead.
>>
>> Cassandra
>>
>> On Mon, Jun 18, 2018 at 3:24 PM Adrien Grand 
>> wrote:
>>
>>> Since artifacts are available, I'll start a vote. I'm happy to
>>> respin if we decide to.
>>>
>>> Le lun. 18 juin 2018 à 19:52, Uwe Schindler  a
>>> écrit :
>>>
 Hi Anshum,



 I was talking about **source** artifacts. Those will miss the
 commit, because it’s a tar.gz of whole source tree 
 (lucene+solr+refguide)!



 Uwe



 -

 Uwe Schindler

 Achterdiek 19, D-28357 Bremen
 

 http://www.thetaphi.de

 eMail: u...@thetaphi.de



 *From:* Anshum Gupta 
 *Sent:* Monday, June 18, 2018 7:35 PM
 *To:* dev@lucene.apache.org
 *Subject:* Re: Lucene/Solr 7.4



 The release binaries would miss the commit, but I don't think that
 we package the ref guide with the binaries so it should be ok.



 I'll push this to master for now and wait for Cassandra to confirm
 (or if someone else knows).



 On Mon, Jun 18, 2018 at 10:18 AM Uwe Schindler 
 wrote:

 I think the source artifacts may miss the commit then. But that's
 not urgent, isn't it?



 Uwe



 Am June 18, 2018 5:09:02 PM UTC schrieb Adrien Grand <

Re: Lucene/Solr 7.4

2018-06-22 Thread Cassandra Targett
Thanks Adrien.

You can go ahead and announce the release if you're ready - it will be a
few days before the vote passes and we don't have to wait IMO.

I think we're ready to merge the release processes but IIRC there are a few
"plumbing" things we still need to do. I forget exactly what's left, but
I'll find out and maybe we can try to do it for 7.5.

On Fri, Jun 22, 2018 at 9:14 AM Adrien Grand  wrote:

> Hi Cassandra,
>
> No worries. Would you like me to delay the announce a bit so that the ref
> guide can be announced at the same time? Should we move the ref guide as
> part of the release process or is there a good reason to keep it separate?
>
> Le ven. 22 juin 2018 à 15:48, Cassandra Targett  a
> écrit :
>
>> I'm sorry I wasn't able to initiate the vote for the Ref Guide earlier
>> this week as I'd intended - I don't like to do the artifact upload if I
>> think I'll be interrupted before finishing all the steps, and traveling
>> this week introduced more interruptions than I'd anticipated in the first
>> part of the week.
>>
>> I'm uploading artifacts this morning, so will start the vote today.
>>
>> On Fri, Jun 22, 2018 at 7:54 AM Adrien Grand  wrote:
>>
>>> Note that one important change compared to previous releases is that we
>>> are now pointing users to https://lucene.apache.org/core/downloads and
>>> http://lucene.apache.org/solr/downloads for downloads, which have been
>>> updated in order to pass the ASF requirements for download pages[1]. Please
>>> let me know if you notice anything wrong with these pages.
>>>
>>> [1] https://www.apache.org/dev/release-download-pages
>>>
>>> Le ven. 22 juin 2018 à 11:31, Adrien Grand  a écrit :
>>>
 I wrote some release notes for Lucene[1] and just a skeleton for
 Solr[2]. Can someone help me with the Solr release notes? Also feel free to
 add items to the Lucene release notes if you think that they are
 release-notes-worthy. Thanks!

 [1] https://wiki.apache.org/lucene-java/ReleaseNote74
 [2] https://wiki.apache.org/solr/ReleaseNote74

 Le lun. 18 juin 2018 à 22:42, Cassandra Targett 
 a écrit :

> Re the Ref Guide changes: it's true the source artifacts will miss
> these. That's been true for nearly all of the releases since we moved to
> this model, however.
>
> It seems a hard choice to skip documenting parameters when there is
> still actually time to get them into the published artifacts of the
> documentation. I of course understand the need for the source artifacts to
> be correct, my point is only that we haven't been strict about it for the
> past year.
>
> I'm flying today, so won't be able to build the Ref Guide RC until
> late tonight/tomorrow morning, so there's time Anshum if you want to
> backport to branch_7_4 & others are also OK with it despite the 7.4 RC
> being available.
>
> However, your commit has a typo:
>
> "...the intention is to restric the size..."
>
> Should be "restrict" instead.
>
> Cassandra
>
> On Mon, Jun 18, 2018 at 3:24 PM Adrien Grand 
> wrote:
>
>> Since artifacts are available, I'll start a vote. I'm happy to respin
>> if we decide to.
>>
>> Le lun. 18 juin 2018 à 19:52, Uwe Schindler  a
>> écrit :
>>
>>> Hi Anshum,
>>>
>>>
>>>
>>> I was talking about **source** artifacts. Those will miss the
>>> commit, because it’s a tar.gz of whole source tree 
>>> (lucene+solr+refguide)!
>>>
>>>
>>>
>>> Uwe
>>>
>>>
>>>
>>> -
>>>
>>> Uwe Schindler
>>>
>>> Achterdiek 19, D-28357 Bremen
>>> 
>>>
>>> http://www.thetaphi.de
>>>
>>> eMail: u...@thetaphi.de
>>>
>>>
>>>
>>> *From:* Anshum Gupta 
>>> *Sent:* Monday, June 18, 2018 7:35 PM
>>> *To:* dev@lucene.apache.org
>>> *Subject:* Re: Lucene/Solr 7.4
>>>
>>>
>>>
>>> The release binaries would miss the commit, but I don't think that
>>> we package the ref guide with the binaries so it should be ok.
>>>
>>>
>>>
>>> I'll push this to master for now and wait for Cassandra to confirm
>>> (or if someone else knows).
>>>
>>>
>>>
>>> On Mon, Jun 18, 2018 at 10:18 AM Uwe Schindler 
>>> wrote:
>>>
>>> I think the source artifacts may miss the commit then. But that's
>>> not urgent, isn't it?
>>>
>>>
>>>
>>> Uwe
>>>
>>>
>>>
>>> Am June 18, 2018 5:09:02 PM UTC schrieb Adrien Grand <
>>> jpou...@gmail.com>:
>>>
>>> Hi Anshum,
>>>
>>> I am in the process of uploading artifacts that passed precommit
>>> locally. Your changes seem to be only about the reference guide, which I
>>> think is built separately? I think that means that I could proceed with 
>>> the
>>> current artifacts that I ha

Re: Lucene/Solr 7.4

2018-06-22 Thread Adrien Grand
Thanks David.

On Fri, Jun 22, 2018 at 16:17, David Smiley wrote:

> On Fri, Jun 22, 2018 at 5:31 AM Adrien Grand  wrote:
>
>> I wrote some release notes for Lucene[1] and just a skeleton for Solr[2].
>> Can someone help me with the Solr release notes? Also feel free to add
>> items to the Lucene release notes if you think that they are
>> release-notes-worthy. Thanks!
>>
>> [1] https://wiki.apache.org/lucene-java/ReleaseNote74
>> [2] https://wiki.apache.org/solr/ReleaseNote74
>>
>
> I took a stab at the Solr release highlights.  I was fairly conservative
> in which things to mention.  I was tempted to make statements about
> performance enhancements in various areas as my sense is this is going to
> be observed and appreciated but I dunno; perhaps every release someone
> would think that depending on what they worked on, so I'm biased and chose
> not to say anything.
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


Re: Lucene/Solr 7.4

2018-06-22 Thread David Smiley
On Fri, Jun 22, 2018 at 5:31 AM Adrien Grand  wrote:

> I wrote some release notes for Lucene[1] and just a skeleton for Solr[2].
> Can someone help me with the Solr release notes? Also feel free to add
> items to the Lucene release notes if you think that they are
> release-notes-worthy. Thanks!
>
> [1] https://wiki.apache.org/lucene-java/ReleaseNote74
> [2] https://wiki.apache.org/solr/ReleaseNote74
>

I took a stab at the Solr release highlights.  I was fairly conservative in
which things to mention.  I was tempted to make statements about
performance enhancements in various areas as my sense is this is going to
be observed and appreciated but I dunno; perhaps every release someone
would think that depending on what they worked on, so I'm biased and chose
not to say anything.
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: Lucene/Solr 7.4

2018-06-22 Thread Adrien Grand
Hi Cassandra,

No worries. Would you like me to delay the announce a bit so that the ref
guide can be announced at the same time? Should we move the ref guide as
part of the release process or is there a good reason to keep it separate?

On Fri, Jun 22, 2018 at 15:48, Cassandra Targett wrote:

> I'm sorry I wasn't able to initiate the vote for the Ref Guide earlier
> this week as I'd intended - I don't like to do the artifact upload if I
> think I'll be interrupted before finishing all the steps, and traveling
> this week introduced more interruptions than I'd anticipated in the first
> part of the week.
>
> I'm uploading artifacts this morning, so will start the vote today.
>
> On Fri, Jun 22, 2018 at 7:54 AM Adrien Grand  wrote:
>
>> Note that one important change compared to previous releases is that we
>> are now pointing users to https://lucene.apache.org/core/downloads and
>> http://lucene.apache.org/solr/downloads for downloads, which have been
>> updated in order to pass the ASF requirements for download pages[1]. Please
>> let me know if you notice anything wrong with these pages.
>>
>> [1] https://www.apache.org/dev/release-download-pages
>>
>> Le ven. 22 juin 2018 à 11:31, Adrien Grand  a écrit :
>>
>>> I wrote some release notes for Lucene[1] and just a skeleton for
>>> Solr[2]. Can someone help me with the Solr release notes? Also feel free to
>>> add items to the Lucene release notes if you think that they are
>>> release-notes-worthy. Thanks!
>>>
>>> [1] https://wiki.apache.org/lucene-java/ReleaseNote74
>>> [2] https://wiki.apache.org/solr/ReleaseNote74
>>>
>>> Le lun. 18 juin 2018 à 22:42, Cassandra Targett 
>>> a écrit :
>>>
 Re the Ref Guide changes: it's true the source artifacts will miss
 these. That's been true for nearly all of the releases since we moved to
 this model, however.

 It seems a hard choice to skip documenting parameters when there is
 still actually time to get them into the published artifacts of the
 documentation. I of course understand the need for the source artifacts to
 be correct, my point is only that we haven't been strict about it for the
 past year.

 I'm flying today, so won't be able to build the Ref Guide RC until late
 tonight/tomorrow morning, so there's time Anshum if you want to backport to
 branch_7_4 & others are also OK with it despite the 7.4 RC being available.

 However, your commit has a typo:

 "...the intention is to restric the size..."

 Should be "restrict" instead.

 Cassandra

 On Mon, Jun 18, 2018 at 3:24 PM Adrien Grand  wrote:

> Since artifacts are available, I'll start a vote. I'm happy to respin
> if we decide to.
>
> Le lun. 18 juin 2018 à 19:52, Uwe Schindler  a
> écrit :
>
>> Hi Anshum,
>>
>>
>>
>> I was talking about **source** artifacts. Those will miss the
>> commit, because it’s a tar.gz of whole source tree 
>> (lucene+solr+refguide)!
>>
>>
>>
>> Uwe
>>
>>
>>
>> -
>>
>> Uwe Schindler
>>
>> Achterdiek 19, D-28357 Bremen
>> 
>>
>> http://www.thetaphi.de
>>
>> eMail: u...@thetaphi.de
>>
>>
>>
>> *From:* Anshum Gupta 
>> *Sent:* Monday, June 18, 2018 7:35 PM
>> *To:* dev@lucene.apache.org
>> *Subject:* Re: Lucene/Solr 7.4
>>
>>
>>
>> The release binaries would miss the commit, but I don't think that we
>> package the ref guide with the binaries so it should be ok.
>>
>>
>>
>> I'll push this to master for now and wait for Cassandra to confirm
>> (or if someone else knows).
>>
>>
>>
>> On Mon, Jun 18, 2018 at 10:18 AM Uwe Schindler 
>> wrote:
>>
>> I think the source artifacts may miss the commit then. But that's not
>> urgent, isn't it?
>>
>>
>>
>> Uwe
>>
>>
>>
>> Am June 18, 2018 5:09:02 PM UTC schrieb Adrien Grand <
>> jpou...@gmail.com>:
>>
>> Hi Anshum,
>>
>> I am in the process of uploading artifacts that passed precommit
>> locally. Your changes seem to be only about the reference guide, which I
>> think is built separately? I think that means that I could proceed with 
>> the
>> current artifacts that I have, let you push your commit, and then we will
>> just need to make sure that your commit is included when we build a 
>> release
>> candidate of the ref guide?
>>
>> Hopefully Cassandra can confirm some of my assumptions.
>>
>>
>>
>> Le lun. 18 juin 2018 à 18:53, Anshum Gupta 
>> a écrit :
>>
>> Hi Adrien,
>>
>>
>>
>> Is it ok for me to commit the documentation patch for SOLR-11277?
>>
>>
>>
>> Anshum
>>
>>
>>
>> On Mon, Jun 18, 2018 at 2:24 AM Alan Woodward 
>>>

Re: Lucene/Solr 7.4

2018-06-22 Thread Cassandra Targett
I'm sorry I wasn't able to initiate the vote for the Ref Guide earlier this
week as I'd intended - I don't like to do the artifact upload if I think
I'll be interrupted before finishing all the steps, and traveling this week
introduced more interruptions than I'd anticipated in the first part of the
week.

I'm uploading artifacts this morning, so will start the vote today.

On Fri, Jun 22, 2018 at 7:54 AM Adrien Grand  wrote:

> Note that one important change compared to previous releases is that we
> are now pointing users to https://lucene.apache.org/core/downloads and
> http://lucene.apache.org/solr/downloads for downloads, which have been
> updated in order to pass the ASF requirements for download pages[1]. Please
> let me know if you notice anything wrong with these pages.
>
> [1] https://www.apache.org/dev/release-download-pages
>
> Le ven. 22 juin 2018 à 11:31, Adrien Grand  a écrit :
>
>> I wrote some release notes for Lucene[1] and just a skeleton for Solr[2].
>> Can someone help me with the Solr release notes? Also feel free to add
>> items to the Lucene release notes if you think that they are
>> release-notes-worthy. Thanks!
>>
>> [1] https://wiki.apache.org/lucene-java/ReleaseNote74
>> [2] https://wiki.apache.org/solr/ReleaseNote74
>>
>> Le lun. 18 juin 2018 à 22:42, Cassandra Targett 
>> a écrit :
>>
>>> Re the Ref Guide changes: it's true the source artifacts will miss
>>> these. That's been true for nearly all of the releases since we moved to
>>> this model, however.
>>>
>>> It seems a hard choice to skip documenting parameters when there is
>>> still actually time to get them into the published artifacts of the
>>> documentation. I of course understand the need for the source artifacts to
>>> be correct, my point is only that we haven't been strict about it for the
>>> past year.
>>>
>>> I'm flying today, so won't be able to build the Ref Guide RC until late
>>> tonight/tomorrow morning, so there's time Anshum if you want to backport to
>>> branch_7_4 & others are also OK with it despite the 7.4 RC being available.
>>>
>>> However, your commit has a typo:
>>>
>>> "...the intention is to restric the size..."
>>>
>>> Should be "restrict" instead.
>>>
>>> Cassandra
>>>
>>> On Mon, Jun 18, 2018 at 3:24 PM Adrien Grand  wrote:
>>>
 Since artifacts are available, I'll start a vote. I'm happy to respin
 if we decide to.

 Le lun. 18 juin 2018 à 19:52, Uwe Schindler  a écrit :

> Hi Anshum,
>
>
>
> I was talking about **source** artifacts. Those will miss the commit,
> because it’s a tar.gz of whole source tree (lucene+solr+refguide)!
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
> 
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Anshum Gupta 
> *Sent:* Monday, June 18, 2018 7:35 PM
> *To:* dev@lucene.apache.org
> *Subject:* Re: Lucene/Solr 7.4
>
>
>
> The release binaries would miss the commit, but I don't think that we
> package the ref guide with the binaries so it should be ok.
>
>
>
> I'll push this to master for now and wait for Cassandra to confirm (or
> if someone else knows).
>
>
>
> On Mon, Jun 18, 2018 at 10:18 AM Uwe Schindler 
> wrote:
>
> I think the source artifacts may miss the commit then. But that's not
> urgent, isn't it?
>
>
>
> Uwe
>
>
>
> Am June 18, 2018 5:09:02 PM UTC schrieb Adrien Grand <
> jpou...@gmail.com>:
>
> Hi Anshum,
>
> I am in the process of uploading artifacts that passed precommit
> locally. Your changes seem to be only about the reference guide, which I
> think is built separately? I think that means that I could proceed with 
> the
> current artifacts that I have, let you push your commit, and then we will
> just need to make sure that your commit is included when we build a 
> release
> candidate of the ref guide?
>
> Hopefully Cassandra can confirm some of my assumptions.
>
>
>
> Le lun. 18 juin 2018 à 18:53, Anshum Gupta  a
> écrit :
>
> Hi Adrien,
>
>
>
> Is it ok for me to commit the documentation patch for SOLR-11277?
>
>
>
> Anshum
>
>
>
> On Mon, Jun 18, 2018 at 2:24 AM Alan Woodward 
> wrote:
>
> LUCENE-8360 is committed.
>
>
>
>
>
> On 18 Jun 2018, at 08:38, Adrien Grand  wrote:
>
>
>
> The fix looks good to me, let's get it in before building the first RC?
>
>
>
> Le dim. 17 juin 2018 à 22:39, Alan Woodward  a
> écrit :
>
> I’m still debugging RandomChains failures, and found
> https://issues.apache.org/jira/browse/LUCENE-8360.  I don’t think
> it’s a Blocke

[jira] [Closed] (SOLR-12512) New Drive Change for existing Solr Installation Setup

2018-06-22 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-12512.


> New Drive Change for existing Solr Installation Setup
> -
>
> Key: SOLR-12512
> URL: https://issues.apache.org/jira/browse/SOLR-12512
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.4.2
>Reporter: Srinivas M
>Priority: Blocker
>  Labels: features
>
> Hi Solr Team,
> As part of Solr project the installation setup and instances(including 
> clustered solr, zk services and indexing jobs schedulers) is available in 
> Windows 'E:\ ' drive in production environment. As business needs to remove 
> the E:\ drive, going forward D:\  drive will be used and operational.
> Are there any possible solutions/steps for moving the existing solr 
> installation setup from the 'E' drive to the 'D' drive (new drive) without any 
> impact on the existing application (it should not require re-indexing)?
> Please let us know your suggestions/solutions.
> Your earliest help will be appreciated!!
> Thanks,
> Srinivas



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12512) New Drive Change for existing Solr Installation Setup

2018-06-22 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-12512.
--
   Resolution: Invalid
Fix Version/s: (was: 6.4.3)

This is not a bug/feature for Solr, but a configuration issue for your system. 
You are already getting help on the mailing list, which is the correct approach.

> New Drive Change for existing Solr Installation Setup
> -
>
> Key: SOLR-12512
> URL: https://issues.apache.org/jira/browse/SOLR-12512
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.4.2
>Reporter: Srinivas M
>Priority: Blocker
>  Labels: features
>
> Hi Solr Team,
> As part of Solr project the installation setup and instances(including 
> clustered solr, zk services and indexing jobs schedulers) is available in 
> Windows 'E:\ ' drive in production environment. As business needs to remove 
> the E:\ drive, going forward D:\  drive will be used and operational.
> Are there any possible solutions/steps for moving the existing solr 
> installation setup from the 'E' drive to the 'D' drive (new drive) without any 
> impact on the existing application (it should not require re-indexing)?
> Please let us know your suggestions/solutions.
> Your earliest help will be appreciated!!
> Thanks,
> Srinivas



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12398) Make JSON Facet API support Heatmap Facet

2018-06-22 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520321#comment-16520321
 ] 

David Smiley commented on SOLR-12398:
-

Updated patch with Solr Ref Guide stuff.
 * New section on the JSON Facet API page.  It has an example request & 
response, though I didn't document the parameters here – instead I pointed to 
the existing docs on the Spatial Search page.
 * Updated the Spatial Search page's heatmap section to note the existence of 
the JSON Facet API as an option.

Precommit is happy and tests pass so I think it's all committable.
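
A rough editorial sketch of what such a JSON Facet heatmap request could look like from SolrJ once this lands. The collection name, the "country" and "location_srpt" field names, the nesting under a terms facet, and the JSON parameter names (type, field, gridLevel) are assumptions modeled on the existing facet.heatmap.* options; the committed ref guide section is the authoritative syntax.

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class HeatmapJsonFacetSketch {
      public static void main(String[] args) throws Exception {
        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/spatialcoll").build()) {
          SolrQuery query = new SolrQuery("*:*");
          query.setRows(0);
          // Heatmap nested under a terms facet: the "heatmap per bucket" use case the
          // issue description calls out. Field and parameter names are assumptions.
          query.add("json.facet",
              "{ byCountry : { type : terms, field : country, facet : {"
              + " hm : { type : heatmap, field : location_srpt, gridLevel : 2 } } } }");
          QueryResponse rsp = client.query(query);
          System.out.println(rsp.getResponse().get("facets"));
        }
      }
    }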

> Make JSON Facet API support Heatmap Facet
> -
>
> Key: SOLR-12398
> URL: https://issues.apache.org/jira/browse/SOLR-12398
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, JSON Request API, spatial
>Reporter: Jaime Yap
>Assignee: David Smiley
>Priority: Major
>  Labels: heatmap
> Attachments: SOLR-12398.patch, SOLR-12398.patch
>
>
> The JSON query Facet API does not support Heatmap facets. For companies that 
> have standardized around generating queries for the JSON query API, it is a 
> major wart to need to also support falling back to the param encoding API in 
> order to make use of them.
> More importantly, however, given its more natural support for nested 
> subfacets, the JSON Query facet API is able to compute more interesting 
> Heatmap layers for each facet bucket, without resorting to the older (and 
> much more awkward) facet pivot syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12398) Make JSON Facet API support Heatmap Facet

2018-06-22 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12398:

Attachment: SOLR-12398.patch

> Make JSON Facet API support Heatmap Facet
> -
>
> Key: SOLR-12398
> URL: https://issues.apache.org/jira/browse/SOLR-12398
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, JSON Request API, spatial
>Reporter: Jaime Yap
>Assignee: David Smiley
>Priority: Major
>  Labels: heatmap
> Attachments: SOLR-12398.patch, SOLR-12398.patch
>
>
> The JSON query Facet API does not support Heatmap facets. For companies that 
> have standardized around generating queries for the JSON query API, it is a 
> major wart to need to also support falling back to the param encoding API in 
> order to make use of them.
> More importantly, however, given its more natural support for nested 
> subfacets, the JSON Query facet API is able to compute more interesting 
> Heatmap layers for each facet bucket, without resorting to the older (and 
> much more awkward) facet pivot syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.4

2018-06-22 Thread Adrien Grand
Note that one important change compared to previous releases is that we are
now pointing users to https://lucene.apache.org/core/downloads and
http://lucene.apache.org/solr/downloads for downloads, which have been
updated in order to pass the ASF requirements for download pages[1]. Please
let me know if you notice anything wrong with these pages.

[1] https://www.apache.org/dev/release-download-pages

On Fri, Jun 22, 2018 at 11:31, Adrien Grand wrote:

> I wrote some release notes for Lucene[1] and just a skeleton for Solr[2].
> Can someone help me with the Solr release notes? Also feel free to add
> items to the Lucene release notes if you think that they are
> release-notes-worthy. Thanks!
>
> [1] https://wiki.apache.org/lucene-java/ReleaseNote74
> [2] https://wiki.apache.org/solr/ReleaseNote74
>
> Le lun. 18 juin 2018 à 22:42, Cassandra Targett  a
> écrit :
>
>> Re the Ref Guide changes: it's true the source artifacts will miss these.
>> That's been true for nearly all of the releases since we moved to this
>> model, however.
>>
>> It seems a hard choice to skip documenting parameters when there is still
>> actually time to get them into the published artifacts of the
>> documentation. I of course understand the need for the source artifacts to
>> be correct, my point is only that we haven't been strict about it for the
>> past year.
>>
>> I'm flying today, so won't be able to build the Ref Guide RC until late
>> tonight/tomorrow morning, so there's time Anshum if you want to backport to
>> branch_7_4 & others are also OK with it despite the 7.4 RC being available.
>>
>> However, your commit has a typo:
>>
>> "...the intention is to restric the size..."
>>
>> Should be "restrict" instead.
>>
>> Cassandra
>>
>> On Mon, Jun 18, 2018 at 3:24 PM Adrien Grand  wrote:
>>
>>> Since artifacts are available, I'll start a vote. I'm happy to respin if
>>> we decide to.
>>>
>>> Le lun. 18 juin 2018 à 19:52, Uwe Schindler  a écrit :
>>>
 Hi Anshum,



 I was talking about **source** artifacts. Those will miss the commit,
 because it’s a tar.gz of whole source tree (lucene+solr+refguide)!



 Uwe



 -

 Uwe Schindler

 Achterdiek 19, D-28357 Bremen
 

 http://www.thetaphi.de

 eMail: u...@thetaphi.de



 *From:* Anshum Gupta 
 *Sent:* Monday, June 18, 2018 7:35 PM
 *To:* dev@lucene.apache.org
 *Subject:* Re: Lucene/Solr 7.4



 The release binaries would miss the commit, but I don't think that we
 package the ref guide with the binaries so it should be ok.



 I'll push this to master for now and wait for Cassandra to confirm (or
 if someone else knows).



 On Mon, Jun 18, 2018 at 10:18 AM Uwe Schindler  wrote:

 I think the source artifacts may miss the commit then. But that's not
 urgent, isn't it?



 Uwe



 Am June 18, 2018 5:09:02 PM UTC schrieb Adrien Grand >>> >:

 Hi Anshum,

 I am in the process of uploading artifacts that passed precommit
 locally. Your changes seem to be only about the reference guide, which I
 think is built separately? I think that means that I could proceed with the
 current artifacts that I have, let you push your commit, and then we will
 just need to make sure that your commit is included when we build a release
 candidate of the ref guide?

 Hopefully Cassandra can confirm some of my assumptions.



 Le lun. 18 juin 2018 à 18:53, Anshum Gupta  a
 écrit :

 Hi Adrien,



 Is it ok for me to commit the documentation patch for SOLR-11277?



 Anshum



 On Mon, Jun 18, 2018 at 2:24 AM Alan Woodward 
 wrote:

 LUCENE-8360 is committed.





 On 18 Jun 2018, at 08:38, Adrien Grand  wrote:



 The fix looks good to me, let's get it in before building the first RC?



 Le dim. 17 juin 2018 à 22:39, Alan Woodward  a
 écrit :

 I’m still debugging RandomChains failures, and found
 https://issues.apache.org/jira/browse/LUCENE-8360.  I don’t think it’s
 a Blocker though, and there will probably be more odd corners that
 ConditionalTokenFilter has exposed which can wait for 7.4.1  or a respin.





 On 15 Jun 2018, at 11:04, Simon Willnauer 
 wrote:



 this issue is fixed

 On Fri, Jun 15, 2018 at 10:54 AM, Simon Willnauer
  wrote:

 our CI found a failure, I opened a blocker and attached a patch:
 https://issues.apache.org/jira/browse/LUCENE-8358

 On Fri, Jun 15, 2018 at 9:15 AM, Simon Willnauer
  wrote:

 +1 for a first RC

 On Fri, Jun 15, 2018 at 9:08 AM, Adrien Grand 
 wrote:
>

[jira] [Created] (SOLR-12512) New Drive Change for existing Solr Installation Setup

2018-06-22 Thread Srinivas M (JIRA)
Srinivas M created SOLR-12512:
-

 Summary: New Drive Change for existing Solr Installation Setup
 Key: SOLR-12512
 URL: https://issues.apache.org/jira/browse/SOLR-12512
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.4.2
Reporter: Srinivas M
 Fix For: 6.4.3


Hi Solr Team,

As part of Solr project the installation setup and instances(including 
clustered solr, zk services and indexing jobs schedulers) is available in 
Windows 'E:\ ' drive in production environment. As business needs to remove the 
E:\ drive, going forward D:\  drive will be used and operational.

Are there any possible solutions/steps for moving the existing solr installation 
setup from the 'E' drive to the 'D' drive (new drive) without any impact on the 
existing application (it should not require re-indexing)?

Please let us know your suggestions/solutions.

Your earliest help will be appreciated!!

Thanks,
Srinivas



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7314) Graduate InetAddressPoint and LatLonPoint to core

2018-06-22 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520273#comment-16520273
 ] 

Lucene/Solr QA commented on LUCENE-7314:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 31m 
27s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} sandbox in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-7314 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928654/LUCENE-7314.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 25e7631 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/37/testReport/ |
| modules | C: lucene lucene/core lucene/sandbox U: lucene |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/37/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Graduate InetAddressPoint and LatLonPoint to core
> -
>
> Key: LUCENE-7314
> URL: https://issues.apache.org/jira/browse/LUCENE-7314
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7314.patch
>
>
> Maybe we should graduate these fields (and related queries) to core for 
> Lucene 6.1?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12458) ADLS support for SOLR

2018-06-22 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520242#comment-16520242
 ] 

Lucene/Solr QA commented on SOLR-12458:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  2m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check licenses {color} | {color:green} 
 2m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  2m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  2m 37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 45s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.store.blockcache.BlockDirectoryTest |
|   | solr.cloud.autoscaling.sim.TestLargeCluster |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12458 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12928662/SOLR-12458.patch |
| Optional Tests |  checklicenses  validatesourcepatterns  ratsources  compile  javac  unit  checkforbiddenapis  validaterefguide  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 25e7631 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| unit | https://builds.apache.org/job/PreCommit-SOLR-Build/131/artifact/out/patch-unit-solr_core.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/131/testReport/ |
| modules | C: lucene solr solr/core solr/solr-ref-guide U: . |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/131/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [DISCUSS] Geo/spatial organization in Lucene

2018-06-22 Thread Alexandre Rafalovitch
Maybe it should be spatial2D vs. spatial3D then, to avoid making it sound
like spatial3D is a child of spatial.

Regards,
   Alex.

On 22 June 2018 at 05:41, Karl Wright  wrote:
> The abstractions in spatial3d are not the same abstractions as you find in
> spatial, and yet they sound similar.  So I'd worry that if we threw them
> together we'd be setting ourselves up for a shotgun marriage at some point.
> I would strongly disagree that that was in any way a good idea.
>
> The fundamental difference between the two is one treats the world as a 2D
> surface, and the other treats the world as an actual ellipsoid.  So it's not
> just about numeric accuracy, since lines in 2D are not the proper great
> circles, and there are singularities at the poles.
>
> Karl
>
>
> On Fri, Jun 22, 2018 at 4:44 AM Alan Woodward  wrote:
>>
>> I don’t normally speak up on spatial issues because I don’t know anything
>> about spatial stuff, but I suppose a point of view from somebody outside the
>> code may be helpful, so…
>>
>> I think I’d lean towards B. Having the 99% case in core makes most sense
>> to me, and it means that we can add some pointers to the search package-info
>> to make it easier for people starting out.  Common interfaces in core make
>> it easier to put specialist classes into separate modules without having
>> cross-dependencies.
>>
>> I’m not sure that having separate ‘spatial’ and ‘spatial3d’ modules is
>> particularly useful, though.  I’d combine these into a single module, with
>> clear package docs explaining what each part is useful for - fast shape
>> searching vs high-precision, etc.
>>
>> I spent a bit of time in the spatial-extras code last year when I was
>> working on replacing ValueSource.  One question I have, again as an outsider
>> to all this, is this: are there still circumstances where indexing spatial
>> data into the terms index, as spatial-extras does, is better in terms of
>> accuracy or performance than using the Points API?  Or should we think about
>> spatial-extras as we do about the legacy numeric encodings, and direct users
>> to LatLonPoint or the geo3d classes instead?  It’s not at all clear to me
>> what the trade-offs are here.
>>
>> - Alan
>>
>> On 20 Jun 2018, at 18:00, Nicholas Knize  wrote:
>>
>> If I were to pick between the two, I also have a preference for B.  I've
>> also tried to keep this whole spatial organization rather simple:
>>
>> core - simple spatial capabilities needed by the 99% spatial use case
>> (e.g., web mapping). Includes LatLonPoint, polygon & distance search
>> (everything currently in sandbox). Lightweight, and no dependencies or
>> complexities. If one wants simple and fast point search, all you need is the
>> core module.
>>
>> spatial - dependency free. Expands on core spatial to include simple shape
>> searching. Uses internal relations. Everything confined to core and spatial
>> modules.
>>
>> spatial-extras - expanded spatial capabilities. Welcomes third-party
>> dependencies (e.g., S3, SIS, Proj4J). Targets more advanced/expert GIS
>> use-cases.
>>
>> geo3d - trades speed for accuracy. I've always struggled with the name,
>> since it implies 3D shapes/point cloud support. But history has shown
>> considering a name change to be a bike-shedding endeavor.
>>
>> At the end of the day I'm up for whatever makes most sense for everyone
>> here. Lord knows we could use more people helping out on geo.
>>
>> - Nick
>>
>>
>>
>> On Wed, Jun 20, 2018 at 11:40 AM Adrien Grand  wrote:
>>>
>>> I have a slight preference for B similarly to how StandardAnalyzer is in
>>> core and other analyzers are in analysis, but no strong feelings. In any
>>> case I agree that both A and B would be much better than the current
>>> situation.
>>>
>>>
>>> On Wed, 20 Jun 2018 at 18:09, David Smiley  wrote:

 I think everyone agrees the current state of spatial code organization
 in Lucene is not desirable.  We have a spatial module that has almost
 nothing in it, we have mature spatial code in the sandbox that needs to
 "graduate" somewhere, and we've got a handful of geo utilities in Lucene
 core (mostly because I didn't notice).  No agreement has been reached on
 what the desired state should be.

 I'd like to hear opinions on this from members of the community.  I am
 especially interested in listening to people that normally don't seem to
 speak up about spatial matters. Perhaps Uwe Schindler and Alan Woodward – I
 respect both of you guys a ton for your tenure with Lucene and aren't too
 pushy with your opinions. I can be convinced to change my mind, especially
 if coming from you two.  Of course anyone can respond -- this is an open
 discussion!

 As I understand it, there are two proposals loosely defined as follows:

 (A) Common spatial needs will be met in the "spatial" module.  The
 Lucene "spatial" module, currently in a weird gutted state, should have
 basically all spatial code

[jira] [Commented] (LUCENE-8367) Make per-dimension drill down optional for each facet dimension

2018-06-22 Thread Michael McCandless (JIRA)


[ https://issues.apache.org/jira/browse/LUCENE-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520218#comment-16520218 ]

Michael McCandless commented on LUCENE-8367:


{quote}Should the DoubleRange equals() compare bits for safety like 
Double.equals()? Otherwise with == it's a bit smelly and buggy (-0 vs 0 and so 
on).
{quote}
Oh good catch!  Sneaky ... I'll switch to {{Double.equals}}.  Thanks [~rcmuir]!
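
For context, a small standalone Java sketch (not taken from the attached patch; the 
class name is illustrative only) of why == on doubles differs from Double.equals for 
-0.0 and NaN, which is the trap being discussed:

{code:java}
// Standalone illustration: == on primitive doubles vs Double.equals(),
// which compares Double.doubleToLongBits() of the two values.
public class DoubleEqualsDemo {
  public static void main(String[] args) {
    // -0.0 and 0.0 are == but have different bit patterns, so an equals()
    // based on == would disagree with a hashCode() based on doubleToLongBits().
    System.out.println(-0.0 == 0.0);                      // true
    System.out.println(Double.valueOf(-0.0).equals(0.0)); // false

    // NaN is never == to itself, but Double.equals() treats NaN as equal to
    // NaN, which keeps equals() reflexive as the Object contract requires.
    double nan = Double.NaN;
    System.out.println(nan == nan);                       // false
    System.out.println(Double.valueOf(nan).equals(nan));  // true
  }
}
{code}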

> Make per-dimension drill down optional for each facet dimension
> ---
>
> Key: LUCENE-8367
> URL: https://issues.apache.org/jira/browse/LUCENE-8367
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/facet
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Major
> Attachments: LUCENE-8367.patch
>
>
> Today, when you index a {{FacetField}} with path {{foo/bar}}, we index two 
> drill down terms onto the document: {{foo}} and {{foo/bar}}.
> But I suspect some users (like me!) don't need to drilldown just on {{foo}} 
> (effectively "find all documents that have any value for this facet 
> dimension"), so I added an option to {{FacetsConfig}} to let you specify 
> per-dimension whether you need to drill down (defaults to true, matching 
> current behavior).
> I also added {{hashCode}} and {{equals}} to the {{LongRange}} and 
> {{DoubleRange}} classes in facets module, and improved {{CheckIndex}} a bit 
> to print the total %deletions across the index.
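
For context, a minimal indexing sketch of the default behaviour described above, 
assuming a Lucene 7.x classpath (lucene-core, lucene-facet, lucene-analyzers-common). 
The commented-out per-dimension toggle is hypothetical: the actual method name comes 
from the attached patch and is not visible here.

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.facet.FacetField;
import org.apache.lucene.facet.FacetsConfig;
import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class FacetDrillDownSketch {
  public static void main(String[] args) throws Exception {
    Directory indexDir = FSDirectory.open(Paths.get("/tmp/facet-index"));
    Directory taxoDir = FSDirectory.open(Paths.get("/tmp/facet-taxo"));
    try (IndexWriter writer =
             new IndexWriter(indexDir, new IndexWriterConfig(new StandardAnalyzer()));
         DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(taxoDir)) {

      FacetsConfig config = new FacetsConfig();
      // Hypothetical call for the per-dimension option described above; the
      // real method name is defined by the attached patch and may differ:
      // config.setRequireDimensionDrillDown("foo", false);

      Document doc = new Document();
      // By default this indexes two drill-down terms: "foo" (dimension only,
      // i.e. "has any value for this dimension") and "foo/bar".
      doc.add(new FacetField("foo", "bar"));
      writer.addDocument(config.build(taxoWriter, doc));
    }
  }
}
{code}

As described above, disabling the option for a dimension would let you skip the 
dimension-only drill-down term when you never query "any value for this dimension".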



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [DISCUSS] Geo/spatial organization in Lucene

2018-06-22 Thread Karl Wright
The abstractions in spatial3d are not the same abstractions as you find in
spatial, and yet they sound similar.  So I'd worry that if we threw them
together we'd be setting ourselves up for a shotgun marriage at some
point.  I would strongly disagree that that was in any way a good idea.

The fundamental difference between the two is one treats the world as a 2D
surface, and the other treats the world as an actual ellipsoid.  So it's
not just about numeric accuracy, since lines in 2D are not the proper great
circles, and there are singularities at the poles.

Karl


On Fri, Jun 22, 2018 at 4:44 AM Alan Woodward  wrote:

> I don’t normally speak up on spatial issues because I don’t know anything
> about spatial stuff, but I suppose a point of view from somebody outside
> the code may be helpful, so…
>
> I think I’d lean towards B. Having the 99% case in core makes most sense
> to me, and it means that we can add some pointers to the search
> package-info to make it easier for people starting out.  Common interfaces
> in core make it easier to put specialist classes into separate modules
> without having cross-dependencies.
>
> I’m not sure that having separate ‘spatial’ and ‘spatial3d’ modules is
> particularly useful, though.  I’d combine these into a single module, with
> clear package docs explaining what each part is useful for - fast shape
> searching vs high-precision, etc.
>
> I spent a bit of time in the spatial-extras code last year when I was
> working on replacing ValueSource.  One question I have, again as an
> outsider to all this, is this: are there still circumstances where indexing
> spatial data into the terms index, as spatial-extras does, is better in
> terms of accuracy or performance than using the Points API?  Or should we
> think about spatial-extras as we do about the legacy numeric encodings, and
> direct users to LatLonPoint or the geo3d classes instead?  It’s not at all
> clear to me what the trade-offs are here.
>
> - Alan
>
> On 20 Jun 2018, at 18:00, Nicholas Knize  wrote:
>
> If I were to pick between the two, I also have a preference for B.  I've
> also tried to keep this whole spatial organization rather simple:
>
> core - simple spatial capabilities needed by the 99% spatial use case
> (e.g., web mapping). Includes LatLonPoint, polygon & distance search
> (everything currently in sandbox). Lightweight, and no dependencies or
> complexities. If one wants simple and fast point search, all you need is
> the core module.
>
> spatial - dependency free. Expands on core spatial to include simple shape
> searching. Uses internal relations. Everything confined to core and spatial
> modules.
>
> spatial-extras - expanded spatial capabilities. Welcomes third-party
> dependencies (e.g., S3, SIS, Proj4J). Targets more advanced/expert GIS
> use-cases.
>
> geo3d - trades speed for accuracy. I've always struggled with the name,
> since it implies 3D shapes/point cloud support. But history has shown
> considering a name change to be a bike-shedding endeavor.
>
> At the end of the day I'm up for whatever makes most sense for everyone
> here. Lord knows we could use more people helping out on geo.
>
> - Nick
>
>
>
> On Wed, Jun 20, 2018 at 11:40 AM Adrien Grand  wrote:
>
>> I have a slight preference for B similarly to how StandardAnalyzer is in
>> core and other analyzers are in analysis, but no strong feelings. In any
>> case I agree that both A and B would be much better than the current
>> situation.
>>
>>
>> On Wed, 20 Jun 2018 at 18:09, David Smiley  wrote:
>>
>>> I think everyone agrees the current state of spatial code organization
>>> in Lucene is not desirable.  We have a spatial module that has almost
>>> nothing in it, we have mature spatial code in the sandbox that needs to
>>> "graduate" somewhere, and we've got a handful of geo utilities in Lucene
>>> core (mostly because I didn't notice).  No agreement has been reached on
>>> what the desired state should be.
>>>
>>> I'd like to hear opinions on this from members of the community.  I am
>>> especially interested in listening to people that normally don't seem to
>>> speak up about spatial matters. Perhaps Uwe Schindler and Alan Woodward – I
>>> respect both of you guys a ton for your tenure with Lucene and aren't too
>>> pushy with your opinions. I can be convinced to change my mind, especially
>>> if coming from you two.  Of course anyone can respond -- this is an open
>>> discussion!
>>>
>>> As I understand it, there are two proposals loosely defined as follows:
>>>
>>> (A) Common spatial needs will be met in the "spatial" module.  The
>>> Lucene "spatial" module, currently in a weird gutted state, should have
>>> basically all spatial code currently in sandbox plus all geo stuff in
>>> Lucene core. Thus there will be no geo stuff in Lucene core.
>>>
>>> (B) Common spatial needs will be met by Lucene core.  Lucene core should
>>> expand it's current "geo" utilities to include the spatial stuff currently
>>> in the sandb

Re: Lucene/Solr 7.4

2018-06-22 Thread Adrien Grand
I wrote some release notes for Lucene[1] and just a skeleton for Solr[2].
Can someone help me with the Solr release notes? Also feel free to add
items to the Lucene release notes if you think that they are
release-notes-worthy. Thanks!

[1] https://wiki.apache.org/lucene-java/ReleaseNote74
[2] https://wiki.apache.org/solr/ReleaseNote74

On Mon, 18 Jun 2018 at 22:42, Cassandra Targett  wrote:

> Re the Ref Guide changes: it's true the source artifacts will miss these.
> That's been true for nearly all of the releases since we moved to this
> model, however.
>
> It seems a hard choice to skip documenting parameters when there is still
> actually time to get them into the published artifacts of the
> documentation. I of course understand the need for the source artifacts to
> be correct, my point is only that we haven't been strict about it for the
> past year.
>
> I'm flying today, so won't be able to build the Ref Guide RC until late
> tonight/tomorrow morning, so there's time Anshum if you want to backport to
> branch_7_4 & others are also OK with it despite the 7.4 RC being available.
>
> However, your commit has a typo:
>
> "...the intention is to restric the size..."
>
> Should be "restrict" instead.
>
> Cassandra
>
> On Mon, Jun 18, 2018 at 3:24 PM Adrien Grand  wrote:
>
>> Since artifacts are available, I'll start a vote. I'm happy to respin if
>> we decide to.
>>
>>> On Mon, 18 Jun 2018 at 19:52, Uwe Schindler  wrote:
>>
>>> Hi Anshum,
>>>
>>>
>>>
>>> I was talking about **source** artifacts. Those will miss the commit,
>>> because it’s a tar.gz of the whole source tree (lucene+solr+refguide)!
>>>
>>>
>>>
>>> Uwe
>>>
>>>
>>>
>>> -
>>>
>>> Uwe Schindler
>>>
>>> Achterdiek 19, D-28357 Bremen
>>> 
>>>
>>> http://www.thetaphi.de
>>>
>>> eMail: u...@thetaphi.de
>>>
>>>
>>>
>>> *From:* Anshum Gupta 
>>> *Sent:* Monday, June 18, 2018 7:35 PM
>>> *To:* dev@lucene.apache.org
>>> *Subject:* Re: Lucene/Solr 7.4
>>>
>>>
>>>
>>> The release binaries would miss the commit, but I don't think that we
>>> package the ref guide with the binaries, so it should be ok.
>>>
>>>
>>>
>>> I'll push this to master for now and wait for Cassandra to confirm (or
>>> if someone else knows).
>>>
>>>
>>>
>>> On Mon, Jun 18, 2018 at 10:18 AM Uwe Schindler  wrote:
>>>
>>> I think the source artifacts may miss the commit then. But that's not
>>> urgent, is it?
>>>
>>>
>>>
>>> Uwe
>>>
>>>
>>>
>>> On June 18, 2018 5:09:02 PM UTC, Adrien Grand >> > wrote:
>>>
>>> Hi Anshum,
>>>
>>> I am in the process of uploading artifacts that passed precommit
>>> locally. Your changes seem to be only about the reference guide, which I
>>> think is built separately? I think that means that I could proceed with the
>>> current artifacts that I have, let you push your commit, and then we will
>>> just need to make sure that your commit is included when we build a release
>>> candidate of the ref guide?
>>>
>>> Hopefully Cassandra can confirm some of my assumptions.
>>>
>>>
>>>
>>> On Mon, 18 Jun 2018 at 18:53, Anshum Gupta  wrote:
>>>
>>> Hi Adrien,
>>>
>>>
>>>
>>> Is it ok for me to commit the documentation patch for SOLR-11277?
>>>
>>>
>>>
>>> Anshum
>>>
>>>
>>>
>>> On Mon, Jun 18, 2018 at 2:24 AM Alan Woodward 
>>> wrote:
>>>
>>> LUCENE-8360 is committed.
>>>
>>>
>>>
>>>
>>>
>>> On 18 Jun 2018, at 08:38, Adrien Grand  wrote:
>>>
>>>
>>>
>>> The fix looks good to me, let's get it in before building the first RC?
>>>
>>>
>>>
>>> On Sun, 17 Jun 2018 at 22:39, Alan Woodward  wrote:
>>>
>>> I’m still debugging RandomChains failures, and found
>>> https://issues.apache.org/jira/browse/LUCENE-8360.  I don’t think it’s
>>> a Blocker though, and there will probably be more odd corners that
>>> ConditionalTokenFilter has exposed which can wait for 7.4.1  or a respin.
>>>
>>>
>>>
>>>
>>>
>>> On 15 Jun 2018, at 11:04, Simon Willnauer 
>>> wrote:
>>>
>>>
>>>
>>> this issue is fixed
>>>
>>> On Fri, Jun 15, 2018 at 10:54 AM, Simon Willnauer
>>>  wrote:
>>>
>>> our CI found a failure, I opened a blocker and attached a patch:
>>> https://issues.apache.org/jira/browse/LUCENE-8358
>>>
>>> On Fri, Jun 15, 2018 at 9:15 AM, Simon Willnauer
>>>  wrote:
>>>
>>> +1 for a first RC
>>>
>>> On Fri, Jun 15, 2018 at 9:08 AM, Adrien Grand  wrote:
>>>
>>> It looks like blockers are all resolved, please let me know if I am
>>> missing
>>> something. I will build a first RC on Monday.
>>>
>>> On Thu, 14 Jun 2018 at 15:02, Alan Woodward  wrote:
>>>
>>>
>>> LUCENE-8357 is in.
>>>
>>> On 14 Jun 2018, at 09:27, Adrien Grand  wrote:
>>>
>>> +1
>>>
>>> On Thu, 14 Jun 2018 at 10:02, Alan Woodward  wrote:
>>>
>>>
>>> Hi Adrien,
>>>
>>> If possible I’d like to get LUCENE-8357 in, which fixes a regression in
>>> Explanations for Solr’s boost queries.
>>>
>>> Alan
>>>
>>>
>>> On 13 Jun 2018, at 20:42, Adrien Grand  wrote:
>>>
>>> It is. In general I trust your judgement to only

Re: [DISCUSS] Geo/spatial organization in Lucene

2018-06-22 Thread Alan Woodward
I don’t normally speak up on spatial issues because I don’t know anything about 
spatial stuff, but I suppose a point of view from somebody outside the code may 
be helpful, so…

I think I’d lean towards B. Having the 99% case in core makes most sense to me, 
and it means that we can add some pointers to the search package-info to make 
it easier for people starting out.  Common interfaces in core make it easier to 
put specialist classes into separate modules without having cross-dependencies.

I’m not sure that having separate ‘spatial’ and ‘spatial3d’ modules is 
particularly useful, though.  I’d combine these into a single module, with 
clear package docs explaining what each part is useful for - fast shape 
searching vs high-precision, etc.

I spent a bit of time in the spatial-extras code last year when I was working 
on replacing ValueSource.  One question I have, again as an outsider to all 
this, is this: are there still circumstances where indexing spatial data into 
the terms index, as spatial-extras does, is better in terms of accuracy or 
performance than using the Points API?  Or should we think about spatial-extras 
as we do about the legacy numeric encodings, and direct users to LatLonPoint or 
the geo3d classes instead?  It’s not at all clear to me what the trade-offs are 
here.

- Alan

> On 20 Jun 2018, at 18:00, Nicholas Knize  wrote:
> 
> If I were to pick between the two, I also have a preference for B.  I've also 
> tried to keep this whole spatial organization rather simple:
> 
> core - simple spatial capabilities needed by the 99% spatial use case (e.g., 
> web mapping). Includes LatLonPoint, polygon & distance search (everything 
> currently in sandbox). Lightweight, and no dependencies or complexities. If 
> one wants simple and fast point search, all you need is the core module.
> 
> spatial - dependency free. Expands on core spatial to include simple shape 
> searching. Uses internal relations. Everything confined to core and spatial 
> modules.
> 
> spatial-extras - expanded spatial capabilities. Welcomes third-party 
> dependencies (e.g., S3, SIS, Proj4J). Targets more advanced/expert GIS 
> use-cases.
> 
> geo3d - trades speed for accuracy. I've always struggled with the name, since 
> it implies 3D shapes/point cloud support. But history has shown considering a 
> name change to be a bike-shedding endeavor. 
> 
> At the end of the day I'm up for whatever makes most sense for everyone here. 
> Lord knows we could use more people helping out on geo.
> 
> - Nick
> 
> 
> 
> On Wed, Jun 20, 2018 at 11:40 AM Adrien Grand  > wrote:
> I have a slight preference for B similarly to how StandardAnalyzer is in core 
> and other analyzers are in analysis, but no strong feelings. In any case I 
> agree that both A and B would be much better than the current situation.
> 
> 
> On Wed, 20 Jun 2018 at 18:09, David Smiley  > wrote:
> I think everyone agrees the current state of spatial code organization in 
> Lucene is not desirable.  We have a spatial module that has almost nothing in 
> it, we have mature spatial code in the sandbox that needs to "graduate" 
> somewhere, and we've got a handful of geo utilities in Lucene core (mostly 
> because I didn't notice).  No agreement has been reached on what the desired 
> state should be.
> 
> I'd like to hear opinions on this from members of the community.  I am 
> especially interested in listening to people that normally don't seem to 
> speak up about spatial matters. Perhaps Uwe Schindler and Alan Woodward – I 
> respect both of you guys a ton for your tenure with Lucene and aren't too 
> pushy with your opinions. I can be convinced to change my mind, especially if 
> coming from you two.  Of course anyone can respond -- this is an open 
> discussion!
> 
> As I understand it, there are two proposals loosely defined as follows:
> 
> (A) Common spatial needs will be met in the "spatial" module.  The Lucene 
> "spatial" module, currently in a weird gutted state, should have basically 
> all spatial code currently in sandbox plus all geo stuff in Lucene core. Thus 
> there will be no geo stuff in Lucene core.
> 
> (B) Common spatial needs will be met by Lucene core.  Lucene core should 
> expand it's current "geo" utilities to include the spatial stuff currently in 
> the sandbox module.  It'd also take on what little remains in the Lucene 
> spatial module and thus we can remove the spatial module. 
> 
> With either plan if a user has certain advanced/specialized needs they may 
> need to go to spatial3d or spatial-extras modules.  These would be untouched 
> in both proposals.
> 
> I'm in favor of (A) on the grounds that we have modules for special feature 
> areas, and spatial should be no different.  My gut estimation is that 75-90% 
> of apps do not have spatial requirements and need not depend on any spatial 
> module.  Other modules are probably used more (e.g. queries, suggest,

[JENKINS] Lucene-Solr-repro - Build # 866 - Unstable

2018-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/866/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/81/consoleText

[repro] Revision: 3a2ec9baf8c213606929a5484a6a642c4f48a75f

[repro] Repro line:  ant test  -Dtestcase=SolrRrdBackendFactoryTest 
-Dtests.method=testBasic -Dtests.seed=F057BA3F9E752B32 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-CL 
-Dtests.timezone=CAT -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=F057BA3F9E752B32 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=uk-UA -Dtests.timezone=Indian/Mahe -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
25e7631b9014a5d0729be7926313c498df1dc606
[repro] git fetch
[repro] git checkout 3a2ec9baf8c213606929a5484a6a642c4f48a75f

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ScheduledMaintenanceTriggerTest
[repro]   SolrRrdBackendFactoryTest
[repro] ant compile-test

[...truncated 3300 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.ScheduledMaintenanceTriggerTest|*.SolrRrdBackendFactoryTest" 
-Dtests.showOutput=onerror  -Dtests.seed=F057BA3F9E752B32 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=uk-UA 
-Dtests.timezone=Indian/Mahe -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 1666 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
[repro]   4/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
[repro] git checkout 25e7631b9014a5d0729be7926313c498df1dc606

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-7.x - Build # 649 - Still Unstable

2018-06-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/649/

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv_stored_idx

Error Message:
Some docs had errors -- check logs expected:<0> but was:<19>

Stack Trace:
java.lang.AssertionError: Some docs had errors -- check logs expected:<0> but 
was:<19>
at 
__randomizedtesting.SeedInfo.seed([820F568032A76876:882807A04DD1CB21]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:342)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv_stored_idx(TestStressCloudBlindAtomicUpdates.java:221)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[

[jira] [Commented] (SOLR-12499) Add reduce operation to merge field values

2018-06-22 Thread Christian Spitzlay (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520085#comment-16520085 ]

Christian Spitzlay commented on SOLR-12499:
---

Sorry, the flu has knocked me out for some days.

And now I will be away for a couple of weeks. Markus Kalkbrenner may take over 
and go forward with this. He has write access to my github repository.
 * The current implementation just copies the values from the first document in 
the group (the sort order may still be something to check; it seems to be 
backwards w.r.t. what I would have expected). I kept that behaviour the way it 
was in GroupOperation.
 * I can imagine a case where the different documents share some values but not 
others.  For the shared values the user would end up with an array of identical 
values, one from each document, which would make handling the result harder. 
Maybe one could provide a list of fields to be merged instead of a single one?
 * The name "flatten" makes sense if one thinks of a group of documents being 
reduced to a single one but I think that is already expressed by the "reduce". 
It think the name of the group operation should specifically express what is 
special about the way it treats the values. One could imagine operations that 
calculate min, max, sum, ... of a given field, for example (although that can 
already be done with rollup, if I understand correctly). The new operation 
merges values from the single documents (arrays or scalars) into an array under 
the same name. So mergeFieldValue seemed straightforward to me. Maybe there's a 
better name of course but I think flatten does not explain what the function 
does at least not how I understand the word flatten.
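
A minimal, framework-free sketch of the merging behaviour described above; tuples are 
modelled here as plain Maps rather than the real streaming-expressions Tuple class, 
and all names are illustrative only:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustration only: merges one field's values across a group of "tuples"
// (plain Maps here) into a single array under the same field name.
public class MergeFieldValuesSketch {

  @SuppressWarnings("unchecked")
  static Map<String, Object> merge(List<Map<String, Object>> group, String field) {
    // Start from the first tuple of the group, as described above.
    Map<String, Object> result = new LinkedHashMap<>(group.get(0));
    List<Object> merged = new ArrayList<>();
    for (Map<String, Object> tuple : group) {
      Object value = tuple.get(field);
      if (value instanceof Collection) {
        merged.addAll((Collection<Object>) value); // multi-valued field
      } else if (value != null) {
        merged.add(value);                         // scalar field
      }
    }
    result.put(field, merged);
    return result;
  }

  public static void main(String[] args) {
    Map<String, Object> t1 = new LinkedHashMap<>();
    t1.put("k1", "2");
    t1.put("k2", Arrays.asList("c", "d"));
    Map<String, Object> t2 = new LinkedHashMap<>();
    t2.put("k1", "2");
    t2.put("k2", Arrays.asList("e", "f"));
    System.out.println(merge(Arrays.asList(t1, t2), "k2")); // {k1=2, k2=[c, d, e, f]}
  }
}
{code}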

 

> Add reduce operation to merge field values
> --
>
> Key: SOLR-12499
> URL: https://issues.apache.org/jira/browse/SOLR-12499
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.3.1
>Reporter: Christian Spitzlay
>Priority: Major
>
> It would be nice to have a reduce operation that lets you collect the values 
> of a field from the group of tuples into an array.
> {code:java}
> Something that transforms
> {
>  "k1": "1",
>  "k2": ["a", "b"]
> },
> {
>  "k1": "2",
>  "k2": ["c", "d"]
> },
> {
>  "k1": "2",
>  "k2": ["e", "f"]
> }
> into
> {
>  "k1": "1",
>  "k2": ["a", "b"]
> },
> {
>  "k1": "2",
>  "k2": ["c", "d", "e", "f"]
> }
> {code}
> and
> {code:java}
> {
>  "k1": "1",
>  "k2": "a"
> },
> {
>  "k1": "2",
>  "k2": "b"
> },
> {
>  "k1": "2",
>  "k2": "c"
> }
> into
> {
>  "k1": "1",
>  "k2": ["a"]
> },
> {
>  "k1": "2",
>  "k2": ["b", "c"]
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 7.4.0 RC1

2018-06-22 Thread Adrien Grand
This vote has passed. Thanks everyone for voting, I will proceed with
releasing these bits.

On Wed, 20 Jun 2018 at 22:15, Kevin Risden  wrote:

> +1
> SUCCESS! [1:59:46.135376]
>
> Kevin Risden
>
> On Wed, Jun 20, 2018 at 11:30 AM, Varun Thacker  wrote:
>
>> +1
>> SUCCESS! [2:53:31.027487]
>>
>> On Wed, Jun 20, 2018 at 11:22 AM, Christian Moen  wrote:
>>
>>> +1
>>> SUCCESS! [1:29:55.531758]
>>>
>>>
>>> On Tue, Jun 19, 2018 at 5:27 AM Adrien Grand  wrote:
>>>
 Please vote for release candidate 1 for Lucene/Solr 7.4.0

 The artifacts can be downloaded from:

 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.4.0-RC1-rev9060ac689c270b02143f375de0348b7f626adebc

 You can run the smoke tester directly with this command:

 python3 -u dev-tools/scripts/smokeTestRelease.py \

 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.4.0-RC1-rev9060ac689c270b02143f375de0348b7f626adebc


 
 Here’s my +1
 SUCCESS! [0:48:15.228535]

>>>
>>
>