[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192003368
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -175,69 +172,83 @@ public String getHashableId() {
     return id;
   }
 
-  public boolean isBlock() {
-    return solrDoc.hasChildDocuments();
+  /**
+   * @return String id to hash
+   */
+  public String getHashableId() {
+    return getHashableId(solrDoc);
   }
 
-  @Override
-  public Iterator<Document> iterator() {
-    return new Iterator<Document>() {
-      Iterator<SolrInputDocument> iter;
-
-      {
-        List<SolrInputDocument> all = flatten(solrDoc);
-
-        String idField = getHashableId();
-
-        boolean isVersion = version != 0;
-
-        for (SolrInputDocument sdoc : all) {
-          sdoc.setField(IndexSchema.ROOT_FIELD_NAME, idField);
-          if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
-          // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
-          // then we could add this field to the generated lucene document instead.
-        }
-
-        iter = all.iterator();
-      }
+  public List<SolrInputDocument> computeFlattenedDocs() {
+    List<SolrInputDocument> all = flatten(solrDoc);
 
-      @Override
-      public boolean hasNext() {
-        return iter.hasNext();
-      }
+    String rootId = getHashableId();
 
-      @Override
-      public Document next() {
-        return DocumentBuilder.toDocument(iter.next(), req.getSchema());
-      }
+    boolean isVersion = version != 0;
 
-      @Override
-      public void remove() {
-        throw new UnsupportedOperationException();
+    for (SolrInputDocument sdoc : all) {
+      if (all.size() > 1) {
+        sdoc.setField(IndexSchema.ROOT_FIELD_NAME, rootId);
       }
-    };
+      if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
+      // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
+      // then we could add this field to the generated lucene document instead.
+    }
+    return all;
   }
 
   private List<SolrInputDocument> flatten(SolrInputDocument root) {
     List<SolrInputDocument> unwrappedDocs = new ArrayList<>();
-    recUnwrapp(unwrappedDocs, root);
+    recUnwrapAnonymous(unwrappedDocs, root, true);
+    recUnwrapRelations(unwrappedDocs, root, true);
     if (1 < unwrappedDocs.size() && ! req.getSchema().isUsableForChildDocs()) {
       throw new SolrException
         (SolrException.ErrorCode.BAD_REQUEST, "Unable to index docs with children: the schema must " +
          "include definitions for both a uniqueKey field and the '" + IndexSchema.ROOT_FIELD_NAME +
          "' field, using the exact same fieldType");
     }
+    unwrappedDocs.add(root);
     return unwrappedDocs;
   }
 
-  private void recUnwrapp(List<SolrInputDocument> unwrappedDocs, SolrInputDocument currentDoc) {
+  /** Extract all child documents from parent that are saved in keys. */
+  private void recUnwrapRelations(List<SolrInputDocument> unwrappedDocs, SolrInputDocument currentDoc, boolean isRoot) {
+    for (SolrInputField field : currentDoc.values()) {
+      Object value = field.getFirstValue();
+      // check if value is a childDocument
+      if (value instanceof SolrInputDocument) {
+        Object val = field.getValue();
+        if (!(val instanceof Collection)) {
+          recUnwrapRelations(unwrappedDocs, ((SolrInputDocument) val));
+          continue;
+        }
+        Collection<SolrInputDocument> childrenList = ((Collection) val);
+        for (SolrInputDocument child : childrenList) {
+          recUnwrapRelations(unwrappedDocs, child);
+        }
+      }
+    }
+
--- End diff --

Yeah, I guess an exception should be thrown
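For illustration, here is a minimal, dependency-free sketch of the unwrapping logic under discussion, using plain Maps in place of SolrInputDocument (all names here are hypothetical stand-ins, not the patch itself). It throws when a child field holds a list that mixes documents and scalars, which is the case the comment agrees should raise an exception:

```java
import java.util.*;

public class FlattenSketch {
    // Stand-in for SolrInputDocument: field name -> value, where a value may
    // itself be a nested "document" (a Map) or a List of them.
    public static List<Map<String, Object>> flatten(Map<String, Object> root) {
        List<Map<String, Object>> out = new ArrayList<>();
        unwrap(out, root);
        out.add(root); // the parent document is appended last, as in the patch
        return out;
    }

    @SuppressWarnings("unchecked")
    private static void unwrap(List<Map<String, Object>> out, Map<String, Object> doc) {
        for (Object value : doc.values()) {
            if (value instanceof Map) {
                // single nested child document
                Map<String, Object> child = (Map<String, Object>) value;
                unwrap(out, child);
                out.add(child);
            } else if (value instanceof Collection) {
                Collection<?> col = (Collection<?>) value;
                Object first = col.isEmpty() ? null : col.iterator().next();
                if (first instanceof Map) {
                    // a list whose first element is a document must be all documents
                    for (Object element : col) {
                        if (!(element instanceof Map)) {
                            throw new IllegalArgumentException(
                                "field mixes child documents and scalar values");
                        }
                        Map<String, Object> child = (Map<String, Object>) element;
                        unwrap(out, child);
                        out.add(child);
                    }
                }
                // a list of plain scalars is an ordinary multi-valued field: ignored here
            }
        }
    }
}
```

The real patch inspects `SolrInputField.getFirstValue()` the same way to decide whether a field holds child documents; the exception branch above is only a sketch of the behavior the comment suggests adding.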


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192000457
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -146,17 +146,14 @@ public String getPrintableId() {
     return "(null)";
   }
 
-  /**
-   * @return String id to hash
-   */
-  public String getHashableId() {
+  public String getHashableId(SolrInputDocument doc) {
--- End diff --

I changed it for something else that was scrapped.
I'll revert the changes.


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191999062
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -175,69 +172,83 @@ public String getHashableId() {
     return id;
   }
 
-  public boolean isBlock() {
-    return solrDoc.hasChildDocuments();
+  /**
+   * @return String id to hash
+   */
+  public String getHashableId() {
+    return getHashableId(solrDoc);
  }
 
-  @Override
-  public Iterator<Document> iterator() {
-    return new Iterator<Document>() {
-      Iterator<SolrInputDocument> iter;
-
-      {
-        List<SolrInputDocument> all = flatten(solrDoc);
-
-        String idField = getHashableId();
-
-        boolean isVersion = version != 0;
-
-        for (SolrInputDocument sdoc : all) {
-          sdoc.setField(IndexSchema.ROOT_FIELD_NAME, idField);
-          if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
-          // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
-          // then we could add this field to the generated lucene document instead.
-        }
-
-        iter = all.iterator();
-      }
+  public List<SolrInputDocument> computeFlattenedDocs() {
+    List<SolrInputDocument> all = flatten(solrDoc);
 
-      @Override
-      public boolean hasNext() {
-        return iter.hasNext();
-      }
+    String rootId = getHashableId();
 
-      @Override
-      public Document next() {
-        return DocumentBuilder.toDocument(iter.next(), req.getSchema());
-      }
+    boolean isVersion = version != 0;
 
-      @Override
-      public void remove() {
-        throw new UnsupportedOperationException();
+    for (SolrInputDocument sdoc : all) {
+      if (all.size() > 1) {
--- End diff --

Previously flatten was not called if there were no child documents, since 
isBlock() would return false in 
[DirectUpdateHandler2](https://github.com/apache/lucene-solr/pull/385/files#diff-ebdc4ecf6a2398f102ba7fae37648d10L976). If we remove this condition, even documents without any children will have 
__root__ added to them, which was not the case beforehand.
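The guard being discussed can be sketched in isolation. In this hypothetical mock-up (plain Maps instead of SolrInputDocument, the literal "_root_" standing in for IndexSchema.ROOT_FIELD_NAME), a single childless document passes through untouched, while a flattened block of more than one document gets the root id stamped onto every member:

```java
import java.util.*;

public class RootStamp {
    // Mirror of the guarded loop: stamp the root id only when flattening
    // actually produced a block (more than one document).
    public static void stampRoot(List<Map<String, Object>> flattened, String rootId) {
        if (flattened.size() > 1) {          // guard hoisted out of the loop
            for (Map<String, Object> doc : flattened) {
                doc.put("_root_", rootId);   // childless docs never gain _root_
            }
        }
    }
}
```

Hoisting the size check out of the loop is equivalent to the per-iteration `if (all.size() > 1)` in the diff, since the list size does not change inside the loop.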


---




[JENKINS] Lucene-Solr-repro - Build # 726 - Unstable

2018-05-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/726/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1554/consoleText

[repro] Revision: 6bbce38b77d5850f2d62d62fe87254e2ac8bd447

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=DeleteStatusTest 
-Dtests.method=testProcessAndWaitDeletesAsyncIds -Dtests.seed=5E2476F8ABCCD5E8 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-SG -Dtests.timezone=America/Cayman -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsCollectionsAPIDistributedZkTest 
-Dtests.method=testCreateShouldFailOnExistingCore -Dtests.seed=5E2476F8ABCCD5E8 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-TN -Dtests.timezone=America/Scoresbysund 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testWaitForStateWatcherIsRetainedOnPredicateFailure 
-Dtests.seed=ACD4CC9F505788AB -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ca -Dtests.timezone=Asia/Tel_Aviv -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
d243f35a5480163fb02e1d36541bf115cec35172
[repro] git fetch
[repro] git checkout 6bbce38b77d5850f2d62d62fe87254e2ac8bd447

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   TestCollectionStateWatchers
[repro]solr/core
[repro]   DeleteStatusTest
[repro]   HdfsCollectionsAPIDistributedZkTest
[repro] ant compile-test

[...truncated 2451 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=ACD4CC9F505788AB -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ca -Dtests.timezone=Asia/Tel_Aviv -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 2017 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 1329 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.DeleteStatusTest|*.HdfsCollectionsAPIDistributedZkTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=5E2476F8ABCCD5E8 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-SG -Dtests.timezone=America/Cayman -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 204 lines...]
[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.DeleteStatusTest
[repro]   0/5 failed: 
org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest
[repro]   2/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers
[repro] git checkout d243f35a5480163fb02e1d36541bf115cec35172

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1555 - Still Unstable

2018-05-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1555/

3 tests failed.
FAILED:  org.apache.solr.cloud.TestStressLiveNodes.testStress

Error Message:
iter3376: [127.0.0.1:33133_solr, thrasher-T3375_1-0, thrasher-T3375_1-1, 
thrasher-T3375_1-2, thrasher-T3375_1-3, thrasher-T3375_1-4] expected:<1> but 
was:<6>

Stack Trace:
java.lang.AssertionError: iter3376: [127.0.0.1:33133_solr, thrasher-T3375_1-0, 
thrasher-T3375_1-1, thrasher-T3375_1-2, thrasher-T3375_1-3, thrasher-T3375_1-4] 
expected:<1> but was:<6>
at 
__randomizedtesting.SeedInfo.seed([E9995B841F2A19F5:FC8226459CDDE18E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestStressLiveNodes.testStress(TestStressLiveNodes.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 231 - Unstable

2018-05-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/231/

14 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not load collection from ZK: collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
collection1
at 
__randomizedtesting.SeedInfo.seed([F6BF429EE676516E:7EEB7D44488A3C96]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1302)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:731)
at 
org.apache.solr.common.cloud.ClusterState$CollectionRef.get(ClusterState.java:386)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1208)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1591)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingDocPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:586)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carro

[jira] [Commented] (SOLR-12297) Create a good SolrClient for SolrCloud paving the way for async requests, HTTP2, multiplexing, and the latest & greatest Jetty features.

2018-05-30 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496130#comment-16496130
 ] 

Shawn Heisey commented on SOLR-12297:
-

[~markrmil...@gmail.com] are you ready for feedback and help on starburst yet, 
or are there things you want to get committed before we put a lot of effort in? 
 Would this issue be an appropriate place to discuss it, or would you prefer 
the dev list?

> Create a good SolrClient for SolrCloud paving the way for async requests, 
> HTTP2, multiplexing, and the latest & greatest Jetty features.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-11654) TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to the ideal shard

2018-05-30 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496119#comment-16496119
 ] 

Gus Heck commented on SOLR-11654:
-

This has been dangling a bit too long, so I'm attaching what I have so far. I feel 
pretty good that I've got code in place that selects an appropriate shard, and 
things seem not to break, but trying to write a test for this code has been a 
mess... I've been lost in the weeds chasing the notion that 
TrackingShardHandlerFactory could be used to track which shards requests were 
sent to, but based on everything I can find, updates never touch shardHandlers, 
so that's a dead end. The in-code documentation on ShardRequest, ShardHandler, 
ShardHandlerFactory, and pretty much everything having anything to do with 
shard requests is virtually non-existent. HttpShardHandler has the seemingly 
odd property that many of the request-making methods are on the factory, not on 
the object produced by the factory... In any case, I think something analogous 
to TrackingShardHandlerFactory for tracking updates is required to properly 
test this, probably a custom URP, though it needs to be configured after 
DistributedUpdateProcessor to ensure it isn't skipped on the sub-requests.

> TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to 
> the ideal shard
> 
>
> Key: SOLR-11654
> URL: https://issues.apache.org/jira/browse/SOLR-11654
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-11654.patch
>
>
> {{TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection}} looks up the 
> Shard/Slice to talk to for the given collection.  It currently picks the 
> first active Shard/Slice but it has a TODO to route to the ideal one based on 
> the router configuration of the target collection.  There is similar code in 
> CloudSolrClient & DistributedUpdateProcessor that should probably be 
> refactored/standardized so that we don't have to repeat this logic.







[jira] [Updated] (SOLR-11654) TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to the ideal shard

2018-05-30 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-11654:

Attachment: SOLR-11654.patch

> TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to 
> the ideal shard
> 
>
> Key: SOLR-11654
> URL: https://issues.apache.org/jira/browse/SOLR-11654
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-11654.patch
>
>
> {{TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection}} looks up the 
> Shard/Slice to talk to for the given collection.  It currently picks the 
> first active Shard/Slice but it has a TODO to route to the ideal one based on 
> the router configuration of the target collection.  There is similar code in 
> CloudSolrClient & DistributedUpdateProcessor that should probably be 
> refactored/standardized so that we don't have to repeat this logic.







Re: Solr Star Burst - SolrCloud Performance / Scale

2018-05-30 Thread Mark Miller
>
> If I just very slowly put it in piece by piece and tried to pre-think
> every step, the results would be pretty dreary.
>

To elaborate on that, there probably would not have been results from me :)

I almost quit in the middle of Jetty HttpClient. I relearned every mistake
I made trying to do the proxy the first time 6 times and then made some new
ones. The security and SSL part are still going to take some grunt work.

I almost quit in the middle of Http2. I hadn't signed up for this. But I
was in too far by then, too much invested.

By the time I got to the QOSFilter, that was a nice change of pace, but it's 
just an early prototype.

It's one of those things that just doesn't happen until some idiot bites 
off more than he can chew. Painful to break up much initially, too general 
to pull in lots of paid devs, too much for one dev.

I've been hunting down thread pools and bad resource use in general as well 
(still clearing out sleeps, focusing on non-test code first, but some test 
code too). I'd like to get that in shape and then start enforcing checks 
and tests around it. A lot of that can probably come in independently.

- Mark


-- 
- Mark
about.me/markrmiller


Re: Solr Star Burst - SolrCloud Performance / Scale

2018-05-30 Thread Mark Miller
On Wed, May 30, 2018 at 10:18 PM Varun Thacker  wrote:

> Hi Mark,
>
> I've started glancing at the repo, and some of the issues you are
> addressing here will make things a lot more stable under high loads. I'll
> look at it in a little more detail in the coming days.
>
> The key would be how to isolate the work into discrete chunks to then go
> and make JIRAs for. SOLR-12405 is the first thing that caught my eye that's
> an isolated JIRA and can be tackled without the HTTP/2 client, etc.
>

Yeah, anything that does not depend on the Jetty HttpClient or HTTP/2 can
likely be brought in independently.

The Http2SolrClient can also come in without HTTP/2 or replacing
HttpSolrClient and still offer non blocking IO async as a new HTTP/1.1
capable user client.

I guess I have maybe 3 JIRA issues filed - Http2SolrClient w/ Jetty
HttpClient, HTTP/2, QOSFilter. That covers the foundation.

As I have gained access to these features though, all of a sudden it
becomes easier to debug and solve other issues. I also learn and discover
by pushing down the road. If I just very slowly put it in piece by piece
and tried to pre-think every step, the results would be pretty dreary.
I would not be anywhere near the current state or have the same
understanding of what still needs to be done. Like SolrCloud originally,
the scope of change is just too large for standard procedure. We had to
fork that too and the merge back was huge and scary, but also would have
only been on master.

So I'll do what I can to keep the branch up to date, and we will have to
pull off bite-sized pieces, with both HTTP/2 and Jetty HttpClient just being
big and invasive no matter what, but almost all for the better :)

As soon as anyone is ready to collaborate concretely on code, let me know
and I'll finish getting a base set of tests passing and move the branch to
Apache.

- Mark
-- 
- Mark
about.me/markrmiller


[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191982413
  
--- Diff: solr/core/src/test/org/apache/solr/update/AddBlockUpdateTest.java ---
@@ -260,10 +266,90 @@ public void testExceptionThrown() throws Exception {
     assertQ(req(parent + ":Y"), "//*[@numFound='0']");
     assertQ(req(parent + ":W"), "//*[@numFound='0']");
   }
-  
+
+  @Test
+  public void testSolrNestedFieldsList() throws Exception {
+
+    final String id1 = id();
+    List<SolrInputDocument> children1 = Arrays.asList(sdoc("id", id(), child, "y"), sdoc("id", id(), child, "z"));
+
+    SolrInputDocument document1 = sdoc("id", id1, parent, "X",
+        "children", children1);
+
+    final String id2 = id();
+    List<SolrInputDocument> children2 = Arrays.asList(sdoc("id", id(), child, "b"), sdoc("id", id(), child, "c"));
+
+    SolrInputDocument document2 = sdoc("id", id2, parent, "A",
+        "children", children2);
+
+    List<SolrInputDocument> docs = Arrays.asList(document1, document2);
+
+    indexSolrInputDocumentsDirectly(docs);
+
+    final SolrIndexSearcher searcher = getSearcher();
+    assertJQ(req("q", "*:*",
+        "fl", "*",
+        "sort", "id asc",
+        "wt", "json"),
+        "/response/numFound==" + "XyzAbc".length());
+    assertJQ(req("q", parent + ":" + document2.getFieldValue(parent),
+        "fl", "*",
+        "sort", "id asc",
+        "wt", "json"),
+        "/response/docs/[0]/id=='" + document2.getFieldValue("id") + "'");
+    assertQ(req("q", child + ":(y z b c)", "sort", "_docid_ asc"),
+        "//*[@numFound='" + "yzbc".length() + "']", // assert physical order of children
+        "//doc[1]/arr[@name='child_s']/str[text()='y']",
+        "//doc[2]/arr[@name='child_s']/str[text()='z']",
+        "//doc[3]/arr[@name='child_s']/str[text()='b']",
+        "//doc[4]/arr[@name='child_s']/str[text()='c']");
+    assertSingleParentOf(searcher, one("bc"), "A");
+    assertSingleParentOf(searcher, one("yz"), "X");
+  }
+
+  @Test
+  public void testSolrNestedFieldsSingleVal() throws Exception {
+    SolrInputDocument document1 = sdoc("id", id(), parent, "X",
+        "child1_s", sdoc("id", id(), "child_s", "y"),
+        "child2_s", sdoc("id", id(), "child_s", "z"));
+
+    SolrInputDocument document2 = sdoc("id", id(), parent, "A",
+        "child1_s", sdoc("id", id(), "child_s", "b"),
+        "child2_s", sdoc("id", id(), "child_s", "c"));
+
+    List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>() {
--- End diff --

remember to use Arrays.asList
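For illustration, a minimal stdlib sketch of what this review nit is about (the names here are hypothetical, not from the PR): the anonymous-ArrayList initializer idiom creates an extra class and, in non-static contexts, silently captures the enclosing instance, whereas Arrays.asList does neither.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListInit {
    public static void main(String[] args) {
        // Anonymous ArrayList subclass with an instance initializer: generates an
        // extra class, and inside an instance context it also captures `this`.
        List<String> verbose = new ArrayList<String>() {{
            add("doc1");
            add("doc2");
        }};

        // Arrays.asList: no subclass, no hidden capture; a fixed-size view of the array.
        List<String> lean = Arrays.asList("doc1", "doc2");

        // List.equals compares element-wise across implementations.
        System.out.println(verbose.equals(lean));
    }
}
```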


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191975716
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -146,17 +146,14 @@ public String getPrintableId() {
  return "(null)";
}
 
-  /**
-   * @return String id to hash
-   */
-  public String getHashableId() {
+  public String getHashableId(SolrInputDocument doc) {
--- End diff --

why did you change the method signature to be overloaded and take the solr 
doc?  It's only called once.


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191983681
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -175,69 +172,83 @@ public String getHashableId() {
 return id;
   }
 
-  public boolean isBlock() {
-    return solrDoc.hasChildDocuments();
+  /**
+   * @return String id to hash
+   */
+  public String getHashableId() {
+    return getHashableId(solrDoc);
   }
 
-  @Override
-  public Iterator<Document> iterator() {
-    return new Iterator<Document>() {
-      Iterator<SolrInputDocument> iter;
-
-      {
-        List<SolrInputDocument> all = flatten(solrDoc);
-
-        String idField = getHashableId();
-
-        boolean isVersion = version != 0;
-
-        for (SolrInputDocument sdoc : all) {
-          sdoc.setField(IndexSchema.ROOT_FIELD_NAME, idField);
-          if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
-          // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
-          // then we could add this field to the generated lucene document instead.
-        }
-
-        iter = all.iterator();
-      }
+  public List<SolrInputDocument> computeFlattenedDocs() {
+    List<SolrInputDocument> all = flatten(solrDoc);
 
-      @Override
-      public boolean hasNext() {
-        return iter.hasNext();
-      }
+    String rootId = getHashableId();
 
-      @Override
-      public Document next() {
-        return DocumentBuilder.toDocument(iter.next(), req.getSchema());
-      }
+    boolean isVersion = version != 0;
 
-      @Override
-      public void remove() {
-        throw new UnsupportedOperationException();
+    for (SolrInputDocument sdoc : all) {
+      if (all.size() > 1) {
+        sdoc.setField(IndexSchema.ROOT_FIELD_NAME, rootId);
       }
-    };
+      if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
+      // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
+      // then we could add this field to the generated lucene document instead.
+    }
+    return all;
   }
 
   private List<SolrInputDocument> flatten(SolrInputDocument root) {
--- End diff --

note that if cmd.isInplaceUpdate(), then there is no flattening to be done 
-- or at least, it's an error.


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191976059
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -175,69 +172,83 @@ public String getHashableId() {
 return id;
   }
 
-  public boolean isBlock() {
-    return solrDoc.hasChildDocuments();
+  /**
+   * @return String id to hash
+   */
+  public String getHashableId() {
+    return getHashableId(solrDoc);
   }
 
-  @Override
-  public Iterator<Document> iterator() {
-    return new Iterator<Document>() {
-      Iterator<SolrInputDocument> iter;
-
-      {
-        List<SolrInputDocument> all = flatten(solrDoc);
-
-        String idField = getHashableId();
-
-        boolean isVersion = version != 0;
-
-        for (SolrInputDocument sdoc : all) {
-          sdoc.setField(IndexSchema.ROOT_FIELD_NAME, idField);
-          if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
-          // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
-          // then we could add this field to the generated lucene document instead.
-        }
-
-        iter = all.iterator();
-      }
+  public List<SolrInputDocument> computeFlattenedDocs() {
+    List<SolrInputDocument> all = flatten(solrDoc);
 
-      @Override
-      public boolean hasNext() {
-        return iter.hasNext();
-      }
+    String rootId = getHashableId();
 
-      @Override
-      public Document next() {
-        return DocumentBuilder.toDocument(iter.next(), req.getSchema());
-      }
+    boolean isVersion = version != 0;
 
-      @Override
-      public void remove() {
-        throw new UnsupportedOperationException();
+    for (SolrInputDocument sdoc : all) {
+      if (all.size() > 1) {
--- End diff --

Previously there was no condition around adding the root field so why start 
now?  I mean... I can understand why _not_ to but with this overall issue 
(SOLR-12361) I see no point in disturbing this.


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191976784
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -175,69 +172,83 @@ public String getHashableId() {
 return id;
   }
 
-  public boolean isBlock() {
-    return solrDoc.hasChildDocuments();
+  /**
+   * @return String id to hash
+   */
+  public String getHashableId() {
+    return getHashableId(solrDoc);
   }
 
-  @Override
-  public Iterator<Document> iterator() {
-    return new Iterator<Document>() {
-      Iterator<SolrInputDocument> iter;
-
-      {
-        List<SolrInputDocument> all = flatten(solrDoc);
-
-        String idField = getHashableId();
-
-        boolean isVersion = version != 0;
-
-        for (SolrInputDocument sdoc : all) {
-          sdoc.setField(IndexSchema.ROOT_FIELD_NAME, idField);
-          if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
-          // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
-          // then we could add this field to the generated lucene document instead.
-        }
-
-        iter = all.iterator();
-      }
+  public List<SolrInputDocument> computeFlattenedDocs() {
--- End diff --

Ok I confess I suggested this name but now I have regrets seeing flatten().  Perhaps computeFinalFlattenedSolrDocs() to convey there is no further manipulation of the docs.  And document it, of course.


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191978869
  
--- Diff: solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java ---
@@ -417,7 +417,8 @@ private void addAndDelete(AddUpdateCommand cmd, List<UpdateLog.DBQ> deletesAfter
   }
 
   private Term getIdTerm(AddUpdateCommand cmd) {
--- End diff --

Latest diff shows this logic calls cmd.computeFlattenedDocs() and we definitely don't want this method computing that!
Now that I look at the code in my IDE, I can appreciate that this is a bit of a tricky issue though, since neither caller of getIdTerm yet has the List<SolrInputDocument> to give to this method.  Hmm.  Maybe `updateDocument` should not take an updateTerm as an argument but should instead figure it out and return it?  Note that if isInPlaceUpdate, the updateTerm will always be the unique key field; you needn't check for a list of docs.
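A shape-only sketch of the refactor being suggested, using hypothetical stand-in types rather than Solr's real AddUpdateCommand/Term: updateDocument derives the update term itself and returns it for reuse, and the in-place case short-circuits to the unique key without ever touching the flattened doc list (the "root:" prefix below is purely illustrative, not Solr's actual term construction).

```java
public class UpdateTermSketch {
    // Hypothetical stand-in for the few AddUpdateCommand facts this decision needs.
    static class Cmd {
        final String uniqueKeyValue;
        final boolean inPlaceUpdate;
        final int flattenedDocCount;
        Cmd(String uniqueKeyValue, boolean inPlaceUpdate, int flattenedDocCount) {
            this.uniqueKeyValue = uniqueKeyValue;
            this.inPlaceUpdate = inPlaceUpdate;
            this.flattenedDocCount = flattenedDocCount;
        }
    }

    // Instead of the caller pre-computing the term (which forces flattening too
    // early), the method computes it and hands it back for the caller to reuse.
    static String updateDocument(Cmd cmd) {
        if (cmd.inPlaceUpdate) {
            // In-place update: the term is always the unique key field; there is
            // no need to inspect (or compute) the list of flattened docs.
            return cmd.uniqueKeyValue;
        }
        // Block update: address the whole tree; single doc: plain unique key.
        String updateTerm = cmd.flattenedDocCount > 1
                ? "root:" + cmd.uniqueKeyValue
                : cmd.uniqueKeyValue;
        // ... the actual writer.updateDocument(s) call would happen here ...
        return updateTerm;
    }

    public static void main(String[] args) {
        System.out.println(updateDocument(new Cmd("42", true, 5)));
        System.out.println(updateDocument(new Cmd("42", false, 3)));
    }
}
```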


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191977537
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -175,69 +172,83 @@ public String getHashableId() {
 return id;
   }
 
-  public boolean isBlock() {
-    return solrDoc.hasChildDocuments();
+  /**
+   * @return String id to hash
+   */
+  public String getHashableId() {
+    return getHashableId(solrDoc);
   }
 
-  @Override
-  public Iterator<Document> iterator() {
-    return new Iterator<Document>() {
-      Iterator<SolrInputDocument> iter;
-
-      {
-        List<SolrInputDocument> all = flatten(solrDoc);
-
-        String idField = getHashableId();
-
-        boolean isVersion = version != 0;
-
-        for (SolrInputDocument sdoc : all) {
-          sdoc.setField(IndexSchema.ROOT_FIELD_NAME, idField);
-          if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
-          // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
-          // then we could add this field to the generated lucene document instead.
-        }
-
-        iter = all.iterator();
-      }
+  public List<SolrInputDocument> computeFlattenedDocs() {
+    List<SolrInputDocument> all = flatten(solrDoc);
 
-      @Override
-      public boolean hasNext() {
-        return iter.hasNext();
-      }
+    String rootId = getHashableId();
 
-      @Override
-      public Document next() {
-        return DocumentBuilder.toDocument(iter.next(), req.getSchema());
-      }
+    boolean isVersion = version != 0;
 
-      @Override
-      public void remove() {
-        throw new UnsupportedOperationException();
+    for (SolrInputDocument sdoc : all) {
+      if (all.size() > 1) {
+        sdoc.setField(IndexSchema.ROOT_FIELD_NAME, rootId);
       }
-    };
+      if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
+      // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
+      // then we could add this field to the generated lucene document instead.
+    }
+    return all;
   }
 
   private List<SolrInputDocument> flatten(SolrInputDocument root) {
     List<SolrInputDocument> unwrappedDocs = new ArrayList<>();
-    recUnwrapp(unwrappedDocs, root);
+    recUnwrapAnonymous(unwrappedDocs, root, true);
+    recUnwrapRelations(unwrappedDocs, root, true);
     if (1 < unwrappedDocs.size() && ! req.getSchema().isUsableForChildDocs()) {
--- End diff --

based on your movement of where the final doc is added, I think this 
condition is now wrong.  1 should be 0.  Or move the condition.  Maybe move it 
to the caller, or to the very end of this method.


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191980588
  
--- Diff: solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java ---
@@ -973,16 +976,43 @@ private void updateDocOrDocValues(AddUpdateCommand cmd, IndexWriter writer, Term
   }
 
   private void updateDocument(AddUpdateCommand cmd, IndexWriter writer, Term updateTerm) throws IOException {
+    List<SolrInputDocument> docs = cmd.getDocsList();
+
     if (cmd.isBlock()) {
-      log.debug("updateDocuments({})", cmd);
-      writer.updateDocuments(updateTerm, cmd);
+      log.debug("updateDocuments({})", docs);
+      writer.updateDocuments(updateTerm, toDocumentsIter(docs, cmd.req.getSchema()));
     } else {
       Document luceneDocument = cmd.getLuceneDocument(false);
       log.debug("updateDocument({})", cmd);
       writer.updateDocument(updateTerm, luceneDocument);
     }
   }
 
+  private Iterable<Document> toDocumentsIter(Collection<SolrInputDocument> docs, IndexSchema schema) {
--- End diff --

I think `Iterables.transform` is a little leaner than indirectly getting 
there via FluentIterable.  I don't think FluentIterable is more concise here.


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191978497
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -175,69 +172,83 @@ public String getHashableId() {
 return id;
   }
 
-  public boolean isBlock() {
-    return solrDoc.hasChildDocuments();
+  /**
+   * @return String id to hash
+   */
+  public String getHashableId() {
+    return getHashableId(solrDoc);
   }
 
-  @Override
-  public Iterator<Document> iterator() {
-    return new Iterator<Document>() {
-      Iterator<SolrInputDocument> iter;
-
-      {
-        List<SolrInputDocument> all = flatten(solrDoc);
-
-        String idField = getHashableId();
-
-        boolean isVersion = version != 0;
-
-        for (SolrInputDocument sdoc : all) {
-          sdoc.setField(IndexSchema.ROOT_FIELD_NAME, idField);
-          if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
-          // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
-          // then we could add this field to the generated lucene document instead.
-        }
-
-        iter = all.iterator();
-      }
+  public List<SolrInputDocument> computeFlattenedDocs() {
+    List<SolrInputDocument> all = flatten(solrDoc);
 
-      @Override
-      public boolean hasNext() {
-        return iter.hasNext();
-      }
+    String rootId = getHashableId();
 
-      @Override
-      public Document next() {
-        return DocumentBuilder.toDocument(iter.next(), req.getSchema());
-      }
+    boolean isVersion = version != 0;
 
-      @Override
-      public void remove() {
-        throw new UnsupportedOperationException();
+    for (SolrInputDocument sdoc : all) {
+      if (all.size() > 1) {
+        sdoc.setField(IndexSchema.ROOT_FIELD_NAME, rootId);
       }
-    };
+      if(isVersion) sdoc.setField(CommonParams.VERSION_FIELD, version);
+      // TODO: if possible concurrent modification exception (if SolrInputDocument not cloned and is being forwarded to replicas)
+      // then we could add this field to the generated lucene document instead.
+    }
+    return all;
   }
 
   private List<SolrInputDocument> flatten(SolrInputDocument root) {
     List<SolrInputDocument> unwrappedDocs = new ArrayList<>();
-    recUnwrapp(unwrappedDocs, root);
+    recUnwrapAnonymous(unwrappedDocs, root, true);
+    recUnwrapRelations(unwrappedDocs, root, true);
     if (1 < unwrappedDocs.size() && ! req.getSchema().isUsableForChildDocs()) {
       throw new SolrException
         (SolrException.ErrorCode.BAD_REQUEST, "Unable to index docs with children: the schema must " +
          "include definitions for both a uniqueKey field and the '" + IndexSchema.ROOT_FIELD_NAME +
          "' field, using the exact same fieldType");
     }
+    unwrappedDocs.add(root);
     return unwrappedDocs;
   }
 
-  private void recUnwrapp(List<SolrInputDocument> unwrappedDocs, SolrInputDocument currentDoc) {
+  /** Extract all child documents from parent that are saved in keys. */
+  private void recUnwrapRelations(List<SolrInputDocument> unwrappedDocs, SolrInputDocument currentDoc, boolean isRoot) {
+    for (SolrInputField field : currentDoc.values()) {
+      Object value = field.getFirstValue();
+      // check if value is a childDocument
+      if (value instanceof SolrInputDocument) {
+        Object val = field.getValue();
+        if (!(val instanceof Collection)) {
+          recUnwrapRelations(unwrappedDocs, ((SolrInputDocument) val));
+          continue;
+        }
+        Collection<SolrInputDocument> childrenList = ((Collection) val);
+        for (SolrInputDocument child : childrenList) {
+          recUnwrapRelations(unwrappedDocs, child);
+        }
+      }
+    }
+
--- End diff --

maybe add a check that these field keyed relations have no anonymous 
children?  It's quick to check.  The reverse shouldn't be allowed either but I 
wouldn't bother enforcing that as it's a bit more work.


---




[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-05-30 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496048#comment-16496048
 ] 

David Smiley commented on SOLR-12361:
-

[~caomanhdat] can you please review the changes to 
IgnoreLargeDocumentProcessorFactory that mosh changed in PR 385?  (note it 
includes more tests).  I have and it looks good though I admit I'm confused by 
this URP as to when exactly it's doing a "primitiveEstimate" vs doing a 
"fastEstimate" (that isn't necessarily primitive).  Two of the instanceof 
dispatches look similar (fastEstimate(obj) and inside the loop of 
fastEstimate(map)).  Hey BTW line 115 could be changed to loop on the 
SolrInputFields instead of the field names to avoid doing an internal hashmap 
lookup on all field names.
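The line-115 suggestion, sketched with plain stdlib maps rather than the actual IgnoreLargeDocumentProcessorFactory types: iterating the map's values (the SolrInputFields) directly makes one pass, while iterating field names and calling get(name) pays a redundant hash lookup per field.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FieldIterationSketch {
    // One pass over the entry values: no extra lookups.
    static long estimateViaValues(Map<String, String> fields) {
        long size = 0;
        for (String value : fields.values()) {
            size += value.length();
        }
        return size;
    }

    // Same result, but each iteration performs an additional get(name) hash lookup.
    static long estimateViaNames(Map<String, String> fields) {
        long size = 0;
        for (String name : fields.keySet()) {
            size += fields.get(name).length();
        }
        return size;
    }

    public static void main(String[] args) {
        Map<String, String> doc = new LinkedHashMap<>();
        doc.put("id", "1");
        doc.put("title_t", "hello");
        System.out.println(estimateViaValues(doc) == estimateViaNames(doc));
    }
}
```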

> Change _childDocuments to Map
> -
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12361.patch, SOLR-12361.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to change 
> _childDocuments in SolrDocumentBase to a Map, to incorporate the relationship 
> between the parent and its child documents.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




Re: Solr Star Burst - SolrCloud Performance / Scale

2018-05-30 Thread Varun Thacker
Hi Mark,

I've started glancing at the the repo and some of the issues you are
addressing here will make things a lot more stable under high loads. I'll
look at it in a little more detail in the coming days.

The key would be how to isolate the work into discrete chunks to then go and
make Jiras for. SOLR-12405 is the first thing that caught my eye that's an
isolated jira and can be tackled without the http2 client etc

On Wed, May 30, 2018 at 4:13 PM, Mark Miller  wrote:

> Some of the fallout of this should be huge improvements to our tests.
> Right now, some of them take so long because no one even notices when they
> have done things to make the situation even worse and it's hard to monitor
> resource usage as we develop with it already fairly unbounded.
>
> On master right now, on a lucky run (no tlog replica type for sure),
> BasicDistributedZkTest on my 6 core machine from 2012 takes 76 seconds.
> Depending on how hard test injection hits, I've seen a few minutes and
> anywhere in between.
>
> Setting the tlog replica issue aside (I've disabled it for the moment, but
> I have fixed that issue by changing how distrib commits work), on the
> starburst branch, resource usage with multiple parallel tests running is
> going to be much, much better. For single cloud tests, performance is
> mostly about removing naive polling and carefree resource usage. The branch
> has big improvements for single and parallel tests already.
>
> I don't know how much left there is to fix, but already, on starburst,
> BasicDistributedZkTest takes 45 seconds vs master's 76 best case.
>
> - Mark
>
> On Wed, May 30, 2018 at 1:52 PM Mark Miller  wrote:
>
>> I've always said I wanted to focus on performance and scale for
>> SolrCloud, but for a long time that really just involved focusing on
>> stability.
>>
>> Now things have started to get pretty stable. Some things that made me
>> cringe about SolrCloud no longer do in 7.3/7.4.
>>
>> Weeks back I found myself yet again looking for spurious, ugly issues
>> around fragile connections that cause recovery headaches and random request
>> fails. Again I made a change that should bring big improvements. Like many
>> times before.
>>
>> I've had just about enough of that. Just about enough of broken
>> connection reuse. Just about enough of countless wasteful threads and
>> connections lurking and creaking all over. Just about enough of poor single
>> update performance and weaknesses in batch updates. Just about enough of
>> the painful ConcurrentUpdateSolrClient.
>>
>> So much inefficiency hiding in plain sight. Stuff I always thought we
>> would overcome, but always far enough in the distance to keep me from
>> feeling bad that I didn't know quite how we would get there. Solr was a
>> container agnostic web application before Solr 5 for god's sake. Even
>> relatively simple changes like upgrading our http client from version 3 to
>> 4 was a huge amount of work for very incremental improvements.
>>
>> If I'm going to be excited about this system after all these years all of
>> that has to change.
>>
>> I started looking into using a HTTP/2 and a new HttpClient that can do
>> non blocking IO async requests.
>>
>> I thought upgrading Apache HttpClient from 3 to 4 was long, tedious, and
>> difficult. Going to a fully different client has made me reconsider that. I
>> did a lot of the work, but a good amount remains (security, finish SSL,
>> tuning ...).
>>
>> I wrote a new Http2SolrClient that can replace HttpSolrClient and plug
>> into CloudSolrClient and LBHttpSolrClient. I added some early async APIs.
>> Non blocking IO async is about as oversold as "schemaless", but it's a
>> great tool to have available as well.
>>
>> I'm now working in a much more efficient world, aiming for 1 connection
>> per CoreContainer per remote destination. Connections are no longer
>> fragile. The transfer protocol is no longer text based.
>>
>> Yonik should be pleased with the state of reordered updates from leader
>> to replica.
>>
>> I replaced our CUSC usage for distributing updates with Http2SolrClient
>> and async calls.
>>
>> I played with optionally using the async calls in the HttpShardHandler as
>> well.
>>
>> I replaced all HttpSolrClient usage with Http2SolrClient.
>>
>> I started to get control of threads. I had control of connections.
>>
>> I added early efficient external request throttling.
>>
>> I started tuning resource pools.
>>
>> I started removing sleep polling loops. They are horrible and slow tests
>> especially, we already have a replacement we are hardly using.
>>
>> I did some other related stuff. I'm just fixing the main things I hate
>> along these communication/resource-usage/scale/perf themes.
>>
>> I'm calling this whole effort Star Burst: https://github.com/markrmiller/starburst
>>
>> I've done a ton. Mostly very late at night, it's not all perfect yet,
>> some of it may be exploratory. There is a lot to do to wrap it up with a
>> bow. This touches a lot of spots,

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 22142 - Unstable!

2018-05-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22142/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic

Error Message:
one expected:<101> but was:<100>

Stack Trace:
java.lang.AssertionError: one expected:<101> but was:<100>
at 
__randomizedtesting.SeedInfo.seed([A9A212C3DF1C8B6:A1603C39E22D4E98]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic(SolrRrdBackendFactoryTest.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13510 lines...]
   [junit4] Suite: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
   [junit4]   2> 720802 INFO  
(SUIT

[jira] [Commented] (LUCENE-8264) Allow an option to rewrite all segments

2018-05-30 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495893#comment-16495893
 ] 

Erick Erickson commented on LUCENE-8264:


Sorry, been away for a while

[~janhoy] re: UninvertDocValuesMergePolicyFactory. True, but wouldn't it be 
nice from an ops perspective to just be able to do this as a single operation?

[~simonw] Thanks for the pointer, I'll look at this when I have a bit more 
breather. I think you're right, this is probably a Solr/ES issue in terms of 
making it convenient from an admin basis. I suspect it'll be something like an 
API command that does the wrapping magic. How to make it maximally convenient 
is something we'll wrestle to the ground.

> Allow an option to rewrite all segments
> ---
>
> Key: LUCENE-8264
> URL: https://issues.apache.org/jira/browse/LUCENE-8264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> For the background, see SOLR-12259.
> There are several use-cases that would be much easier, especially during 
> upgrades, if we could specify that all segments get rewritten. 
> One example: Upgrading 5x->6x->7x. When segments are merged, they're 
> rewritten into the current format. However, there's no guarantee that a 
> particular segment _ever_ gets merged so the 6x-7x upgrade won't necessarily 
> be successful.
> How many merge policies support this is an open question. I propose to start 
> with TMP and raise other JIRAs as necessary for other merge policies.
> So far the usual response has been "re-index from scratch", but that's 
> increasingly difficult as systems get larger.







[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-30 Thread Alexandre Rafalovitch (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495878#comment-16495878
 ] 

Alexandre Rafalovitch commented on SOLR-10299:
--

If we are getting a VM, there is nothing that stops that VM have a proxy in 
front of Solr that rejects everything but one route (/select or /manual or 
/manual730) from outside of localhost. And rejects anything that has '..' or 
other magic characters. And has CORS headers locking the - browser-originated - 
requests to the website hosting official manual only.

The Solr itself can also be configured to not even have Update Request Handler. 
The building of the index can be handled by the ant script that builds Solr 
itself, given that Solr and the documentation and - whatever - is all in one 
branch now. The index - built locally - can then be secure FTPd to the VM and 
an index swap command can be called by a watcher script.

Or if Docker is supported, maybe - after the container gets built - it can be 
mounted read-only and only the latest manual is searchable.

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
> Attachments: basic-services-diagram.png
>
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.







[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-30 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495868#comment-16495868
 ] 

Shawn Heisey commented on SOLR-10299:
-

[~gstein], the search capability for the reference guide needs to be available to 
everyone.  It's our online documentation.  We can't limit it to just committers.

Within our project, there's no shortage of expertise in making Solr do amazing 
things.  But for the kind of setup that Infra provides for project web pages, 
running and accessing a service like Solr is extremely difficult. It is *Solr* 
that we don't want to expose to the Internet.  Solr shouldn't be accessible to 
*anyone* outside of trusted admins and the servers that will send updates and 
make queries on behalf of users.

Here's a silly diagram of how it would normally look:

 !basic-services-diagram.png!

The current search capability in the reference guide is javascript, so it runs 
in the user's browser and accesses a json file with the search data that's 
generated along with the reference guide.  At the moment, it can only search 
page titles and doesn't have full-text search capability.  I think that our 
best bet is probably to continue that general approach, but change the search 
library to one that can do the job better.
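The in-browser approach described above amounts to shipping a small prebuilt index with the guide and querying it client-side, with no server involved. As a rough sketch of that general idea (all class and method names here are hypothetical, and the real ref guide implementation is browser JavaScript over a generated JSON file, not Java):

```java
import java.util.*;

// Toy sketch of build-time "search data": index page text into
// term -> page-id postings, then answer queries from that structure alone.
// RefGuideIndex, addPage, and search are illustrative names only.
public class RefGuideIndex {
  private final Map<String, Set<Integer>> postings = new HashMap<>();
  private final List<String> titles = new ArrayList<>();

  // Index one page; returns its id. Tokenization is a naive \W+ split.
  public int addPage(String title, String body) {
    int id = titles.size();
    titles.add(title);
    for (String tok : (title + " " + body).toLowerCase().split("\\W+")) {
      if (!tok.isEmpty()) {
        postings.computeIfAbsent(tok, k -> new TreeSet<>()).add(id);
      }
    }
    return id;
  }

  // AND query: titles of pages containing every query term.
  public List<String> search(String query) {
    Set<Integer> hits = null;
    for (String tok : query.toLowerCase().split("\\W+")) {
      Set<Integer> ids = postings.getOrDefault(tok, Collections.emptySet());
      if (hits == null) hits = new TreeSet<>(ids); else hits.retainAll(ids);
    }
    List<String> out = new ArrayList<>();
    if (hits != null) for (int id : hits) out.add(titles.get(id));
    return out;
  }
}
```

An off-the-shelf client-side search library would add stemming, ranking, and a serialized prebuilt index on top of this basic shape.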







[jira] [Updated] (SOLR-10299) Provide search for online Ref Guide

2018-05-30 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-10299:

Attachment: basic-services-diagram.png







Re: Solr Star Burst - SolrCloud Performance / Scale

2018-05-30 Thread Mark Miller
Some of the fallout of this should be huge improvements to our tests. Right
now, some of them take so long because no one notices when a change makes the
situation even worse, and it's hard to monitor resource usage as we develop
while it's already fairly unbounded.

On master right now, on a lucky run (no tlog replica type, for sure),
BasicDistributedZkTest takes 76 seconds on my 6-core machine from 2012.
Depending on how hard test injection hits, I've seen a few minutes and
anywhere in between.

Setting the tlog replica issue aside (I've disabled it for the moment, but
I have fixed that issue by changing how distrib commits work), on the
starburst branch, resource usage with multiple parallel tests running is
going to be much, much better. For single cloud tests, performance is
mostly about removing naive polling and carefree resource usage. The branch
has big improvements for single and parallel tests already.

I don't know how much left there is to fix, but already, on starburst,
BasicDistributedZkTest takes 45 seconds vs master's 76 best case.

- Mark

On Wed, May 30, 2018 at 1:52 PM Mark Miller  wrote:

> I've always said I wanted to focus on performance and scale for SolrCloud,
> but for a long time that really just involved focusing on stability.
>
> Now things have started to get pretty stable. Some things that made me
> cringe about SolrCloud no longer do in 7.3/7.4.
>
> Weeks back I found myself yet again looking for spurious, ugly issues
> around fragile connections that cause recovery headaches and random request
> fails. Again I made a change that should bring big improvements. Like many
> times before.
>
> I've had just about enough of that. Just about enough of broken connection
> reuse. Just about enough of countless wasteful threads and connections
> lurking and creaking all over. Just about enough of poor single update
> performance and weaknesses in batch updates. Just about enough of the
> painful ConcurrentUpdateSolrClient.
>
> So much inefficiency hiding in plain sight. Stuff I always thought we
> would overcome, but always far enough in the distance to keep me from
> feeling bad that I didn't know quite how we would get there. Solr was a
> container agnostic web application before Solr 5 for god's sake. Even
> relatively simple changes like upgrading our http client from version 3 to
> 4 was a huge amount of work for very incremental improvements.
>
> If I'm going to be excited about this system after all these years all of
> that has to change.
>
> I started looking into using HTTP/2 and a new HttpClient that can do
> non-blocking async IO requests.
>
> I thought upgrading Apache HttpClient from 3 to 4 was long, tedious, and
> difficult. Going to a fully different client has made me reconsider that. I
> did a lot of the work, but a good amount remains (security, finish SSL,
> tuning ...).
>
> I wrote a new Http2SolrClient that can replace HttpSolrClient and plug
> into CloudSolrClient and LBHttpSolrClient. I added some early async APIs.
> Non blocking IO async is about as oversold as "schemaless", but it's a
> great tool to have available as well.
>
> I'm now working in a much more efficient world, aiming for 1 connection
> per CoreContainer per remote destination. Connections are no longer
> fragile. The transfer protocol is no longer text based.
>
> Yonik should be pleased with the state of reordered updates from leader to
> replica.
>
> I replaced our CUSC usage for distributing updates with Http2SolrClient
> and async calls.
>
> I played with optionally using the async calls in the HttpShardHandler as
> well.
>
> I replaced all HttpSolrClient usage with Http2SolrClient.
>
> I started to get control of threads. I had control of connections.
>
> I added early efficient external request throttling.
>
> I started tuning resource pools.
>
> I started removing sleep polling loops. They are horrible and slow tests
> especially, we already have a replacement we are hardly using.
>
> I did some other related stuff. I'm just fixing the main things I hate
> along these communication/resource-usage/scale/perf themes.
>
> I'm calling this whole effort Star Burst:
> https://github.com/markrmiller/starburst
>
> I've done a ton. Mostly very late at night, it's not all perfect yet, some
> of it may be exploratory. There is a lot to do to wrap it up with a bow.
> This touches a lot of spots, our surface area of features is just huge now.
>
> Basically I have a high performance Solr fork at the moment (only setup
> for tests, not actually running stand alone Solr). I don't know how or when
> (or to be completely honest, if) it comes home. I'm going to do what I can,
> but it's likely to require more than me to be successful in a reasonable
> time frame.
>
> I have a couple JIRA issues open for HTTP/2 and the new SolrClient.
>
> Mark
>
>
> --
> - Mark
> about.me/markrmiller
>
-- 
- Mark
about.me/markrmiller
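For readers unfamiliar with the request style Mark describes: with a non-blocking async client, a request returns a future immediately instead of tying up a thread until the response arrives. The JDK 11 java.net.http client can illustrate the shape of such an API. This is illustration only; Solr's actual Http2SolrClient is built on Jetty and has its own API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

// Demo of the async, HTTP/2-capable request style using only the JDK.
public class AsyncHttp2Demo {
  public static HttpClient newClient() {
    // Prefer HTTP/2; the client falls back to HTTP/1.1 if the server
    // doesn't support it.
    return HttpClient.newBuilder().version(HttpClient.Version.HTTP_2).build();
  }

  public static CompletableFuture<Integer> statusOf(HttpClient client, String url) {
    HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
    // sendAsync returns immediately; no caller thread blocks on the response.
    return client.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                 .thenApply(HttpResponse::statusCode);
  }
}
```

Many requests can be in flight over a single HTTP/2 connection, which is what makes a goal like "1 connection per CoreContainer per remote destination" plausible.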


[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 230 - Still Failing

2018-05-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/230/

No tests ran.

Build Log:
[...truncated 24220 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2201 links (1756 relative) to 2974 anchors in 229 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.4.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml


[GitHub] lucene-solr pull request #388: Update package-info.java

2018-05-30 Thread yhcharles
GitHub user yhcharles opened a pull request:

https://github.com/apache/lucene-solr/pull/388

Update package-info.java

add a missing parenthesis

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yhcharles/lucene-solr patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/388.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #388


commit b9debd0a9c0707dc492d8f72071e9518acbb1f27
Author: Charlie Yan 
Date:   2018-05-30T22:15:04Z

Update package-info.java

add a missing parenthesis




---




[jira] [Commented] (LUCENE-8278) UAX29URLEmailTokenizer is not detecting some tokens as URL type

2018-05-30 Thread Junte Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495777#comment-16495777
 ] 

Junte Zhang commented on LUCENE-8278:
-

Hi Steve, sorry for the late response. I will check this tomorrow. Thanks for 
picking up this bug report! 

> UAX29URLEmailTokenizer is not detecting some tokens as URL type
> ---
>
> Key: LUCENE-8278
> URL: https://issues.apache.org/jira/browse/LUCENE-8278
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Junte Zhang
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: LUCENE-8278.patch
>
>
> We are using the UAX29URLEmailTokenizer so we can use the token types in our 
> plugins.
> However, I noticed that the tokenizer is not detecting certain URLs as  
> but  instead.
> Examples that are not working:
>  * example.com is 
>  * example.net is 
> But:
>  * https://example.com is 
>  * as is https://example.net
> Examples that work:
>  * example.ch is 
>  * example.co.uk is 
>  * example.nl is 
> I have checked this JIRA, and could not find an issue. I have tested this on 
> Lucene (Solr) 6.4.1 and 7.3.
> Could someone confirm my findings and advise what I could do to (help) 
> resolve this issue?






[jira] [Commented] (LUCENE-8278) UAX29URLEmailTokenizer is not detecting some tokens as URL type

2018-05-30 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495719#comment-16495719
 ] 

Steve Rowe commented on LUCENE-8278:


I plan on committing this tomorrow if I don't get any feedback before then.







[jira] [Commented] (SOLR-12088) Shards with dead replicas cause increased write latency

2018-05-30 Thread Jerry Bao (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495708#comment-16495708
 ] 

Jerry Bao commented on SOLR-12088:
--

[~caomanhdat] I can't confirm or deny whether or not this has been fixed, but 
I'm happy with closing this out and reopening if we see it again.

> Shards with dead replicas cause increased write latency
> ---
>
> Key: SOLR-12088
> URL: https://issues.apache.org/jira/browse/SOLR-12088
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Major
>
> If a collection's shard contains dead replicas, write latency to the 
> collection is increased. For example, if a collection has 10 shards with a 
> replication factor of 3, and one of those shards contains 3 replicas and 3 
> downed replicas, write latency is increased in comparison to a shard that 
> contains only 3 replicas.
> My feeling here is that downed replicas should be completely ignored and not 
> cause issues to other alive replicas in terms of write latency.






[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495655#comment-16495655
 ] 

Jan Høydahl commented on SOLR-10299:


{quote}But we wouldn't want it exposed to the open Internet,
{quote}
We definitely want the search to be public for all, but we'd like to avoid 
building and maintaining a server-side application to serve the search result 
page.

My recommendation is to pursue the in-browser JS search options mentioned. Then 
search follows each guide. With a hosted index we'd need to maintain separate 
indices for each released Solr version as well.

Jan







[jira] [Updated] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-05-30 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-7976:
---
Attachment: LUCENE-7976.patch

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name; suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.






[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-05-30 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495578#comment-16495578
 ] 

Erick Erickson commented on LUCENE-7976:


Iteration N+1. This one removes the horrible loop that concerned [~mikemccand], 
and good riddance to it. Also puts in all the rest of the changes so far.

2 out of 2,004 iterations of TestTieredMergePolicy.testPartialMerge failed 
because a forceMerge was specified with maxSegments != 1 that didn't produce 
the exact number of segments specified. I changed the test a bit to accommodate 
the fact that if we respect maxSegmentSize + 25% as an upper limit, then there 
are certainly some situations where the expected segment count will not be 
exactly what's specified. Is this acceptable? It's the packing problem.

And of course I thought that when the segment count _is_ 1 there should be no 
ambiguity so that's why two patches are uploaded so close to each other.

Meanwhile I'll run another couple of thousand iterations and the whole 
precommit/test cycle again.

Pending more comments I think we're close.
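A quick sanity check of the arithmetic in the issue description: under the current defaults, a segment is effectively ineligible for merging until its live data drops below half the max merged segment size, which is where the "97.5G of the docs ... deleted" figure for a 100G optimized segment comes from. A small illustrative helper (not TieredMergePolicy's actual API):

```java
// Back-of-envelope check of merge eligibility: a segment of totalGb only
// becomes mergeable again once its live size falls below maxSegmentGb / 2.
// Names here are illustrative, not TieredMergePolicy's real methods.
public class MergeEligibility {
  // GB of deletions required before a segment of totalGb can merge again.
  public static double deletionsNeededGb(double totalGb, double maxSegmentGb) {
    double liveLimit = maxSegmentGb / 2.0;  // 2.5 GB for the 5 GB default
    return Math.max(0.0, totalGb - liveLimit);
  }
}
```

For the 100G forceMerged segment in the description with the 5G default, deletionsNeededGb(100, 5) yields 97.5, matching the figure quoted in the issue.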







[jira] [Updated] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-05-30 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-7976:
---
Attachment: LUCENE-7976.patch







[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 68 - Still Unstable

2018-05-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/68/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestGenericDistributedQueue.testDistributedQueue

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([7D9D96624973904E]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestGenericDistributedQueue

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([7D9D96624973904E]:0)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testNodeLostTriggerRestoreState

Error Message:
The trigger did not fire at all

Stack Trace:
java.lang.AssertionError: The trigger did not fire at all
at 
__randomizedtesting.SeedInfo.seed([7D9D96624973904E:56624339D30B859E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testNodeLostTriggerRestoreState(TestTriggerIntegration.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.

Solr Star Burst - SolrCloud Performance / Scale

2018-05-30 Thread Mark Miller
I've always said I wanted to focus on performance and scale for SolrCloud,
but for a long time that really just involved focusing on stability.

Now things have started to get pretty stable. Some things that made me
cringe about SolrCloud no longer do in 7.3/7.4.

Weeks back, I found myself yet again chasing spurious, ugly issues
around fragile connections that cause recovery headaches and random request
failures. Once again I made a change that should bring big improvements, like
many times before.

I've had just about enough of that. Just about enough of broken connection
reuse. Just about enough of countless wasteful threads and connections
lurking and creaking all over. Just about enough of poor single update
performance and weaknesses in batch updates. Just about enough of the
painful ConcurrentUpdateSolrClient.

So much inefficiency hiding in plain sight. Stuff I always thought we would
overcome, but always far enough in the distance to keep me from feeling bad
that I didn't know quite how we would get there. Solr was a
container-agnostic web application before Solr 5, for god's sake. Even relatively
simple changes like upgrading our HTTP client from version 3 to 4 were a
huge amount of work for very incremental improvements.

If I'm going to be excited about this system after all these years, all of
that has to change.

I started looking into HTTP/2 and a new HttpClient that can do
non-blocking async I/O requests.

I thought upgrading Apache HttpClient from 3 to 4 was long, tedious, and
difficult. Going to a fully different client has made me reconsider that. I
did a lot of the work, but a good amount remains (security, finish SSL,
tuning ...).

I wrote a new Http2SolrClient that can replace HttpSolrClient and plug into
CloudSolrClient and LBHttpSolrClient. I added some early async APIs. Non-blocking
async I/O is about as oversold as "schemaless", but it's a great
tool to have available as well.
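The actual Solr work here is built on Jetty's client, but the model being described (one long-lived shared client, HTTP/2 binary framing, async requests that don't tie up a thread per call) can be illustrated with JDK 11's standard java.net.http. Everything below is an analogy, not Solr or Jetty code.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

// Stdlib analogy for the architecture described above: a single shared
// HTTP/2 client instead of fragile per-request connections, and async
// sends that return futures instead of blocking a thread on the socket.
public class Http2AsyncSketch {

    // One long-lived client per process, reused for every destination.
    static final HttpClient CLIENT = HttpClient.newBuilder()
            .version(HttpClient.Version.HTTP_2)  // binary, multiplexed transport
            .build();

    static CompletableFuture<String> fetch(String url) {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).build();
        // sendAsync returns immediately; completion happens on the
        // client's internal non-blocking I/O machinery.
        return CLIENT.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                     .thenApply(HttpResponse::body);
    }
}
```

With HTTP/2 multiplexing, many in-flight requests share one connection, which is what makes "1 connection per CoreContainer per remote destination" plausible.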

I'm now working in a much more efficient world, aiming for 1 connection per
CoreContainer per remote destination. Connections are no longer fragile.
The transfer protocol is no longer text-based.

Yonik should be pleased with the state of reordered updates from leader to
replica.

I replaced our CUSC usage for distributing updates with Http2SolrClient and
async calls.

I played with optionally using the async calls in the HttpShardHandler as
well.

I replaced all HttpSolrClient usage with Http2SolrClient.

I started to get control of threads. I had control of connections.

I added early efficient external request throttling.

I started tuning resource pools.

I started removing sleep polling loops. They are horrible and especially slow
down tests; we already have a replacement that we are hardly using.

I did some other related stuff. I'm just fixing the main things I hate
along these communication/resource-usage/scale/perf themes.

I'm calling this whole effort Star Burst:
https://github.com/markrmiller/starburst

I've done a ton, mostly very late at night; it's not all perfect yet, and some
of it may be exploratory. There is a lot to do to wrap it up with a bow.
This touches a lot of spots; our surface area of features is just huge now.

Basically, I have a high-performance Solr fork at the moment (only set up for
tests, not actually running standalone Solr). I don't know how or when (or,
to be completely honest, if) it comes home. I'm going to do what I can, but
it's likely to require more than me to be successful in a reasonable time
frame.

I have a couple JIRA issues open for HTTP/2 and the new SolrClient.

Mark


-- 
- Mark
about.me/markrmiller


[jira] [Updated] (SOLR-12429) ZK upconfig throws confusing error when it encounters a symlink

2018-05-30 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12429:

Description: 
If a configset being uploaded to ZK contains a symlink pointing at a directory, 
an error is thrown, but it doesn't explain the real problem.  The upconfig 
should detect symlinks and throw an error indicating that they aren't 
supported.  If we can detect any other type of file that upconfig can't use 
(sockets, device files, etc), the error message should be relevant.

{noformat}
Exception in thread "main" java.io.IOException: File 
'/var/solr/mbs/artist/conf/common' exists but is a directory
at org.apache.commons.io.FileUtils.openInputStream(FileUtils.java:286)
at 
org.apache.commons.io.FileUtils.readFileToByteArray(FileUtils.java:1815)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:391)
at 
org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:305)
at 
org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:291)
at java.nio.file.Files.walkFileTree(Files.java:2670)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.solr.common.cloud.ZkMaintenanceUtils.uploadToZK(ZkMaintenanceUtils.java:291)
at 
org.apache.solr.common.cloud.SolrZkClient.uploadToZK(SolrZkClient.java:793)
at 
org.apache.solr.common.cloud.ZkConfigManager.uploadConfigDir(ZkConfigManager.java:78)
at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:236)
{noformat}

I have not tested whether a symlink pointing at a file works, but I think that 
an error should be thrown for ANY symlink.
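A pre-upload check along these lines could fail fast with a clear message. This is a hypothetical sketch, not the actual ZkMaintenanceUtils code; the class and method names are invented for illustration.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Hypothetical pre-check: before uploading a configset, walk it and reject
// ANY symlink (file or directory) with a relevant error, instead of the
// confusing "exists but is a directory" failure from commons-io.
public class ConfigSetCheck {

    public static void rejectSymlinks(Path confDir) {
        try {
            // walkFileTree does not follow links by default, so a symlinked
            // directory appears as a single entry we can inspect directly.
            Files.walkFileTree(confDir, new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {
                    return check(dir);
                }
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                    return check(file);
                }
                private FileVisitResult check(Path p) {
                    if (Files.isSymbolicLink(p)) {
                        throw new IllegalArgumentException(
                            "Symlinks are not supported in configsets: " + p);
                    }
                    return FileVisitResult.CONTINUE;
                }
            });
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The same visitor would be the natural place to reject sockets, device files, and other non-regular files.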


> ZK upconfig throws confusing error when it encounters a symlink
> ---
>
> Key: SOLR-12429
> URL: https://issues.apache.org/jira/browse/SOLR-12429
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI
>Affects Versions: 7.3.1
>Reporter: Shawn Heisey
>Priority: Major
>
> If a configset being uploaded to ZK contains a symlink pointing at a 
> directory, an error is thrown, but it doesn't explain the real problem.  The 
> upconfig should detect symlinks and throw an error indicating that they 
> aren't supported.  If we can detect any other type of file that upconfig 
> can't use (sockets, device files, etc), the error message should be relevant.
> {noformat}
> Exception in thread "main" java.io.IOException: File 
> '/var/solr/mbs/artist/conf/common' exists but is a directory
>   at org.apache.commons.io.FileUtils.openInputStream(FileUtils.java:286)
>   at 
> org.apache.commons.io.FileUtils.readFileToByteArray(FileUtils.java:1815)
>   at 
> org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:391)
>   at 
> org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:305)
>   at 
> org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:291)
>   at java.nio.file.Files.walkFileTree(Files.java:2670)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.solr.common.cloud.ZkMaintenanceUtils.uploadToZK(ZkMaintenanceUtils.java:291)
>   at 
> org.apache.solr.common.cloud.SolrZkClient.uploadToZK(SolrZkClient.java:793)
>   at 
> org.apache.solr.common.cloud.ZkConfigManager.uploadConfigDir(ZkConfigManager.java:78)
>   at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:236)
> {noformat}
> I have not tested whether a symlink pointing at a file works, but I think 
> that an error should be thrown for ANY symlink.






[jira] [Updated] (SOLR-12429) ZK upconfig throws confusing error when it encounters a symlink

2018-05-30 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12429:

Environment: (was: If a configset being uploaded to ZK contains a 
symlink pointing at a directory, an error is thrown, but it doesn't explain the 
real problem.  The upconfig should detect symlinks and throw an error 
indicating that they aren't supported.  If we can detect any other type of file 
that upconfig can't use (sockets, device files, etc), the error message should 
be relevant.

{noformat}
Exception in thread "main" java.io.IOException: File 
'/var/solr/mbs/artist/conf/common' exists but is a directory
at org.apache.commons.io.FileUtils.openInputStream(FileUtils.java:286)
at 
org.apache.commons.io.FileUtils.readFileToByteArray(FileUtils.java:1815)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:391)
at 
org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:305)
at 
org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:291)
at java.nio.file.Files.walkFileTree(Files.java:2670)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.solr.common.cloud.ZkMaintenanceUtils.uploadToZK(ZkMaintenanceUtils.java:291)
at 
org.apache.solr.common.cloud.SolrZkClient.uploadToZK(SolrZkClient.java:793)
at 
org.apache.solr.common.cloud.ZkConfigManager.uploadConfigDir(ZkConfigManager.java:78)
at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:236)
{noformat}

I have not tested whether a symlink pointing at a file works, but I think that 
an error should be thrown for ANY symlink.
)

> ZK upconfig throws confusing error when it encounters a symlink
> ---
>
> Key: SOLR-12429
> URL: https://issues.apache.org/jira/browse/SOLR-12429
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCLI
>Affects Versions: 7.3.1
>Reporter: Shawn Heisey
>Priority: Major
>







[jira] [Created] (SOLR-12429) ZK upconfig throws confusing error when it encounters a symlink

2018-05-30 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-12429:
---

 Summary: ZK upconfig throws confusing error when it encounters a 
symlink
 Key: SOLR-12429
 URL: https://issues.apache.org/jira/browse/SOLR-12429
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCLI
Affects Versions: 7.3.1
 Environment: If a configset being uploaded to ZK contains a symlink 
pointing at a directory, an error is thrown, but it doesn't explain the real 
problem.  The upconfig should detect symlinks and throw an error indicating 
that they aren't supported.  If we can detect any other type of file that 
upconfig can't use (sockets, device files, etc), the error message should be 
relevant.

{noformat}
Exception in thread "main" java.io.IOException: File 
'/var/solr/mbs/artist/conf/common' exists but is a directory
at org.apache.commons.io.FileUtils.openInputStream(FileUtils.java:286)
at 
org.apache.commons.io.FileUtils.readFileToByteArray(FileUtils.java:1815)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:391)
at 
org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:305)
at 
org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:291)
at java.nio.file.Files.walkFileTree(Files.java:2670)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.solr.common.cloud.ZkMaintenanceUtils.uploadToZK(ZkMaintenanceUtils.java:291)
at 
org.apache.solr.common.cloud.SolrZkClient.uploadToZK(SolrZkClient.java:793)
at 
org.apache.solr.common.cloud.ZkConfigManager.uploadConfigDir(ZkConfigManager.java:78)
at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:236)
{noformat}

I have not tested whether a symlink pointing at a file works, but I think that 
an error should be thrown for ANY symlink.

Reporter: Shawn Heisey









[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-05-30 Thread Greg Stein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495503#comment-16495503
 ] 

Greg Stein commented on SOLR-10299:
---

From HipChat: "But we wouldn't want it exposed to the open Internet, so 
accessing such a service from the documentation pages securely would require 
server side code."

HTTPd config could easily limit the pages to (say) Apache committers, without 
any server side code.

Also note that if you're limiting the audience, then HA/failover should not be 
needed; it doesn't sound like you would need that for the smaller audience. Our 
VMs have great uptime, so failures tend to be in the software stack, which 
generally means HA won't save you.

Point being: you can simplify your deployment.

(and also, that we tend to provide just *one* VM to each project, so asking for 
"several" is typically a non-starter)

Greg Stein
Infrastructure Administrator, ASF

 

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content. Not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.






[jira] [Created] (SOLR-12428) Adding LTR jar to _default configset

2018-05-30 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-12428:
---

 Summary: Adding LTR jar to _default configset
 Key: SOLR-12428
 URL: https://issues.apache.org/jira/browse/SOLR-12428
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya
Assignee: Ishan Chattopadhyaya


Even though Solr comes out of the box with the LTR capabilities, it is not 
possible to use them in an existing collection without hand editing the 
solrconfig.xml to add the jar. Many other contrib jars are already present in 
the _default configset's solrconfig.xml.

I propose to add the ltr jar in the _default configset's solrconfig:
{code}
  <lib dir="${solr.install.dir:../../../..}/contrib/ltr/lib/" regex=".*\.jar" />
{code}

Any thoughts, [~cpoerschke]?






[jira] [Commented] (SOLR-12427) Status 500 on Incorrect value for start and rows

2018-05-30 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495468#comment-16495468
 ] 

Munendra S N commented on SOLR-12427:
-

 [^SOLR-12427.patch] 
This one uses params.getInt() instead of implementing a new method.
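The general shape of the fix, whichever patch lands, is to turn a malformed numeric parameter into a client error rather than letting a NumberFormatException bubble up as a 500. The names below are invented for illustration; Solr's real code would throw SolrException with ErrorCode.BAD_REQUEST.

```java
// Illustrative only: a malformed or negative start/rows value becomes a
// 400-equivalent client error instead of an unhandled 500. Not Solr's API.
public class ParamParseSketch {

    static int parseNonNegativeInt(String name, String raw, int defaultValue) {
        if (raw == null) {
            return defaultValue;
        }
        final int value;
        try {
            value = Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            // client sent something that isn't a number -> client error (400)
            throw new IllegalArgumentException(
                "Invalid value '" + raw + "' for parameter '" + name + "'");
        }
        if (value < 0) {
            // negative values: the case already handled since SOLR-7254
            throw new IllegalArgumentException(
                "'" + name + "' must be >= 0, got " + value);
        }
        return value;
    }
}
```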

> Status 500 on Incorrect value for start and rows
> 
>
> Key: SOLR-12427
> URL: https://issues.apache.org/jira/browse/SOLR-12427
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Trivial
> Attachments: SOLR-12427.patch, SOLR-12427.patch
>
>
> With SOLR-7254, 
> cases where start and rows are negative were handled, but the case where an 
> invalid value is passed is not handled.
> Hence, Solr returns 500. It would be better to return 400, as it is a client error.






[jira] [Updated] (SOLR-12427) Status 500 on Incorrect value for start and rows

2018-05-30 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12427:

Attachment: SOLR-12427.patch

> Status 500 on Incorrect value for start and rows
> 
>
> Key: SOLR-12427
> URL: https://issues.apache.org/jira/browse/SOLR-12427
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Trivial
> Attachments: SOLR-12427.patch, SOLR-12427.patch
>
>
> With SOLR-7254, 
> cases where start and rows are negative were handled, but the case where an 
> invalid value is passed is not handled.
> Hence, Solr returns 500. It would be better to return 400, as it is a client error.






[jira] [Updated] (SOLR-12427) Status 500 on Incorrect value for start and rows

2018-05-30 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12427:

Attachment: SOLR-12427.patch

> Status 500 on Incorrect value for start and rows
> 
>
> Key: SOLR-12427
> URL: https://issues.apache.org/jira/browse/SOLR-12427
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Priority: Trivial
> Attachments: SOLR-12427.patch
>
>
> With SOLR-7254, 
> cases where start and rows are negative were handled, but the case where an 
> invalid value is passed is not handled.
> Hence, Solr returns 500. It would be better to return 400, as it is a client error.






[jira] [Commented] (SOLR-12395) Typo in SignificantTermsQParserPlugin.NAME

2018-05-30 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495418#comment-16495418
 ] 

Lucene/Solr QA commented on SOLR-12395:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
37s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  4m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  4m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  4m 41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 14s{color} 
| {color:red} core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 42s{color} 
| {color:red} solrj in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.search.TestStandardQParsers |
|   | solr.search.QueryEqualityTest |
|   | solr.cloud.autoscaling.sim.TestComputePlanAction |
|   | solr.metrics.rrd.SolrRrdBackendFactoryTest |
|   | solr.security.hadoop.TestDelegationWithHadoopAuth |
|   | solr.cloud.autoscaling.sim.TestTriggerIntegration |
|   | solr.cloud.autoscaling.sim.TestLargeCluster |
|   | solr.cloud.MultiThreadedOCPTest |
|   | solr.client.solrj.io.sql.JdbcTest |
|   | solr.common.cloud.TestCollectionStateWatchers |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12395 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925595/SOLR-12395.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / d27a2e8 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/108/artifact/out/patch-unit-solr_core.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/108/artifact/out/patch-unit-solr_solrj.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/108/testReport/ |
| modules | C: solr/core solr/solrj U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/108/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Typo in SignificantTermsQParserPlugin.NAME
> --
>
> Key: SOLR-12395
> URL: https://issues.apache.org/jira/browse/SOLR-12395
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 6.5, 7.3.1
>Reporter: Tobias Kässmann
>Assignee: Christine Poerschke
>Priority: Trivial
> Attachments: SOLR-12395.patch, SOLR-12395.patch
>
>
> I think there is a small typo in the {{SignificantTermsQParserPlugin}}:
> {code:java}
> public static final String NAME = "sigificantTerms";{code}
> should be:
> {code:java}
> public static final String NAME = "significantTerms";{code}
>  See the patch attached.






[jira] [Comment Edited] (LUCENE-7161) TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery assertion error

2018-05-30 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495411#comment-16495411
 ] 

Alessandro Benedetti edited comment on LUCENE-7161 at 5/30/18 4:42 PM:
---

While refactoring MoreLikeThis [1], I just found this test awaiting a fix:
 (  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-7161")
 public void testMultiFieldShouldReturnPerFieldBooleanQuery ) 

Happy to help. Where can I find a seed to reproduce and debug the test failure?
 I tried the test seeds in this JIRA, but all of them succeed on my local 
machine, e.g.:

ant test -Dtestcase=TestMoreLikeThis 
-Dtests.method=testMultiFieldShouldReturnPerFieldBooleanQuery 
-Dtests.seed=C802AA860A1EAE50 -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=hi -Dtests.timezone=MST7MDT -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[1] https://issues.apache.org/jira/browse/LUCENE-8326



> TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery assertion 
> error
> ---
>
> Key: LUCENE-7161
> URL: https://issues.apache.org/jira/browse/LUCENE-7161
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: 6.7, 7.0
>
>
> I just hit this unrelated but reproducible on master 
> #cc75be53f9b3b86ec59cb93896c4fd5a9a5926b2 while tweaking earth's radius:
> {noformat}
>[junit4] Suite: org.apache.lucene.queries.mlt.TestMoreLikeThis
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestMoreLikeThis 
> -Dtests.method=testMultiFieldShouldReturnPerFieldBooleanQuery 
> -Dtests.seed=794526110651C8E6 -Dtests.locale=es-HN 
> -Dtests.timezone=Brazil/West -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.25s | 
> TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([794526110651C8E6:1DF67ED7BBBF4E1D]:0)
>[junit4]>  at 
> org.apache.lucene.queries.mlt.TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery(TestMoreLikeThis.java:320)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=ClassicSimilarity, locale=es-HN, timezone=Brazil/West
>[junit4]   2> NOTE: Linux 3.13.0-71-generic amd64/Oracle Corporation 
> 1.8.0_60 (64-bit)/cpus=8,threads=1,free=409748864,total=504889344
>[junit4]   2> NOTE: All tests run in this JVM: [TestMoreLikeThis]
>[junit4] Completed [1/1 (1!)] in 0.45s, 1 test, 1 failure <<< FAILURES!
>[junit4] 
>[junit4] 
>[junit4] Tests with failures [seed: 794526110651C8E6]:
>[junit4]   - 
> org.apache.lucene.queries.mlt.TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery
> {noformat}
> Likely related to LUCENE-6954?






[jira] [Commented] (LUCENE-7161) TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery assertion error

2018-05-30 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495411#comment-16495411
 ] 

Alessandro Benedetti commented on LUCENE-7161:
--

While refactoring the MoreLikeThis[1]  I just found this test awaiting fix 
(  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-7161";)
public void testMultiFieldShouldReturnPerFieldBooleanQuery ) 

Happy to help, where can I find a seed to reproduce and debug the test failure ?
I tried the test seeds in this Jira, but all of them succeeds on my local 
machine 
e.g.

ant test -Dtestcase=TestMoreLikeThis 
-Dtests.method=testMultiFieldShouldReturnPerFieldBooleanQuery 
-Dtests.seed=C802AA860A1EAE50 -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=hi -Dtests.timezone=MST7MDT -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[1] https://issues.apache.org/jira/browse/LUCENE-8326

> TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery assertion 
> error
> ---
>
> Key: LUCENE-7161
> URL: https://issues.apache.org/jira/browse/LUCENE-7161
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: 6.7, 7.0
>
>
> I just hit this unrelated but reproducible on master 
> #cc75be53f9b3b86ec59cb93896c4fd5a9a5926b2 while tweaking earth's radius:
> {noformat}
>[junit4] Suite: org.apache.lucene.queries.mlt.TestMoreLikeThis
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestMoreLikeThis 
> -Dtests.method=testMultiFieldShouldReturnPerFieldBooleanQuery 
> -Dtests.seed=794526110651C8E6 -Dtests.locale=es-HN 
> -Dtests.timezone=Brazil/West -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.25s | 
> TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([794526110651C8E6:1DF67ED7BBBF4E1D]:0)
>[junit4]>  at 
> org.apache.lucene.queries.mlt.TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery(TestMoreLikeThis.java:320)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=ClassicSimilarity, locale=es-HN, timezone=Brazil/West
>[junit4]   2> NOTE: Linux 3.13.0-71-generic amd64/Oracle Corporation 
> 1.8.0_60 (64-bit)/cpus=8,threads=1,free=409748864,total=504889344
>[junit4]   2> NOTE: All tests run in this JVM: [TestMoreLikeThis]
>[junit4] Completed [1/1 (1!)] in 0.45s, 1 test, 1 failure <<< FAILURES!
>[junit4] 
>[junit4] 
>[junit4] Tests with failures [seed: 794526110651C8E6]:
>[junit4]   - 
> org.apache.lucene.queries.mlt.TestMoreLikeThis.testMultiFieldShouldReturnPerFieldBooleanQuery
> {noformat}
> Likely related to LUCENE-6954?






[jira] [Resolved] (SOLR-12271) Analytics Component reads negative float and double field values incorrectly

2018-05-30 Thread Dennis Gove (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove resolved SOLR-12271.

Resolution: Fixed

> Analytics Component reads negative float and double field values incorrectly
> 
>
> Key: SOLR-12271
> URL: https://issues.apache.org/jira/browse/SOLR-12271
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Assignee: Dennis Gove
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the analytics component uses the incorrect way of converting 
> numeric doc values longs to doubles and floats.
> The fix is easy and the tests now cover this use case.
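[Editorial note: the decoding mistake behind this class of bug can be sketched without Solr itself. Lucene stores doubles in numeric doc values as *sortable* longs (see NumericUtils.doubleToSortableLong / sortableLongToDouble), not as raw IEEE-754 bit patterns; reading them back with a plain Double.longBitsToDouble garbles exactly the negative values. The class and method names below are illustrative, not Solr's actual analytics code; the bit transform mirrors Lucene's sortable-bits involution.]

```java
// Illustration: Lucene encodes doubles as sortable longs so numeric order
// matches long order. Decoding must undo that transform; reinterpreting the
// stored long as raw IEEE-754 bits breaks negatives.
public class SortableDoubleDemo {

  // Lucene-style sortable-bits transform: for negatives, flip all bits except
  // the sign; for positives, leave unchanged. (& binds tighter than ^.)
  static long sortableDoubleBits(long bits) {
    return bits ^ (bits >> 63) & 0x7fffffffffffffffL;
  }

  static long encode(double v) {
    return sortableDoubleBits(Double.doubleToLongBits(v));
  }

  static double decode(long sortable) {
    // The transform is an involution, so decoding applies it again.
    return Double.longBitsToDouble(sortableDoubleBits(sortable));
  }

  public static void main(String[] args) {
    long stored = encode(-1.5);
    // Correct: undo the sortable transform before reinterpreting the bits.
    System.out.println(decode(stored));                  // -1.5
    // Incorrect: raw bit reinterpretation yields a wrong negative value.
    System.out.println(Double.longBitsToDouble(stored));
  }
}
```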






[jira] [Commented] (LUCENE-8300) Add unordered-distinct IntervalsSource

2018-05-30 Thread Matt Weber (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495389#comment-16495389
 ] 

Matt Weber commented on LUCENE-8300:


Thank you [~romseygeek]!

> Add unordered-distinct IntervalsSource
> --
>
> Key: LUCENE-8300
> URL: https://issues.apache.org/jira/browse/LUCENE-8300
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8300.patch, LUCENE-8300.patch
>
>
> [~mattweber] pointed out on LUCENE-8196 that {{Intervals.unordered()}} 
> doesn't check to see if its subintervals overlap, which means that for 
> example {{Intervals.unordered(Intervals.term("a"), Intervals.term("a"))}} 
> would match a document with {{a}} appearing only once.  This ticket will 
> introduce a new function, {{Intervals.unordered_distinct()}}, that ensures 
> that all subintervals within an unordered interval do not overlap.
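[Editorial note: the extra constraint the ticket describes can be sketched independently of Lucene's interval machinery. The check below is illustrative only, assuming each sub-interval is a [start, end] token-position range within one candidate match; it is not Lucene's implementation.]

```java
import java.util.Arrays;

// Sketch of the "distinct" constraint: within one candidate match, no two
// sub-intervals may overlap (share a position). Illustrative positions only.
public class DistinctIntervalsDemo {

  static final class Interval {
    final int start, end;
    Interval(int start, int end) { this.start = start; this.end = end; }
  }

  // True iff no pair of intervals shares any position.
  static boolean allDistinct(Interval... intervals) {
    Interval[] sorted = intervals.clone();
    Arrays.sort(sorted, (a, b) -> Integer.compare(a.start, b.start));
    for (int i = 1; i < sorted.length; i++) {
      if (sorted[i].start <= sorted[i - 1].end) {
        return false; // next interval begins before the previous one ends
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // unordered(term("a"), term("a")) matching a single "a" at position 2:
    // both sub-intervals collapse onto [2,2], which plain unordered accepts...
    System.out.println(allDistinct(new Interval(2, 2), new Interval(2, 2))); // false
    // ...while two separate occurrences pass the distinct check.
    System.out.println(allDistinct(new Interval(2, 2), new Interval(5, 5))); // true
  }
}
```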






[jira] [Commented] (SOLR-12271) Analytics Component reads negative float and double field values incorrectly

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495345#comment-16495345
 ] 

ASF subversion and git services commented on SOLR-12271:


Commit 528b96540e4d65ff69bc4f2d6e0f78615c5e317e in lucene-solr's branch 
refs/heads/branch_7x from [~houstonputman]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=528b965 ]

SOLR-12271: Updating changes.txt


> Analytics Component reads negative float and double field values incorrectly
> 
>
> Key: SOLR-12271
> URL: https://issues.apache.org/jira/browse/SOLR-12271
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Assignee: Dennis Gove
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the analytics component uses the incorrect way of converting 
> numeric doc values longs to doubles and floats.
> The fix is easy and the tests now cover this use case.






[jira] [Commented] (SOLR-12271) Analytics Component reads negative float and double field values incorrectly

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495342#comment-16495342
 ] 

ASF subversion and git services commented on SOLR-12271:


Commit 0ef8e5aa800845d63a3c848a646aa08afa24f0e6 in lucene-solr's branch 
refs/heads/master from [~houstonputman]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0ef8e5a ]

SOLR-12271: Fix for analytics component reading negative values from double and 
float fields.


> Analytics Component reads negative float and double field values incorrectly
> 
>
> Key: SOLR-12271
> URL: https://issues.apache.org/jira/browse/SOLR-12271
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Assignee: Dennis Gove
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the analytics component uses the incorrect way of converting 
> numeric doc values longs to doubles and floats.
> The fix is easy and the tests now cover this use case.






[jira] [Commented] (SOLR-12271) Analytics Component reads negative float and double field values incorrectly

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495343#comment-16495343
 ] 

ASF subversion and git services commented on SOLR-12271:


Commit d243f35a5480163fb02e1d36541bf115cec35172 in lucene-solr's branch 
refs/heads/master from [~houstonputman]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d243f35 ]

SOLR-12271: Updating changes.txt


> Analytics Component reads negative float and double field values incorrectly
> 
>
> Key: SOLR-12271
> URL: https://issues.apache.org/jira/browse/SOLR-12271
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Assignee: Dennis Gove
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the analytics component uses the incorrect way of converting 
> numeric doc values longs to doubles and floats.
> The fix is easy and the tests now cover this use case.






[jira] [Commented] (SOLR-12271) Analytics Component reads negative float and double field values incorrectly

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495344#comment-16495344
 ] 

ASF subversion and git services commented on SOLR-12271:


Commit a02876c9be85ceb7008d53a78a555eb14f28eb1e in lucene-solr's branch 
refs/heads/branch_7x from [~houstonputman]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a02876c ]

SOLR-12271: Fix for analytics component reading negative values from double and 
float fields.


> Analytics Component reads negative float and double field values incorrectly
> 
>
> Key: SOLR-12271
> URL: https://issues.apache.org/jira/browse/SOLR-12271
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Assignee: Dennis Gove
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the analytics component uses the incorrect way of converting 
> numeric doc values longs to doubles and floats.
> The fix is easy and the tests now cover this use case.






[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495325#comment-16495325
 ] 

ASF subversion and git services commented on SOLR-11779:


Commit 0e4512c23149a0c9968ccf5a49dd9e3ea01072e7 in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0e4512c ]

SOLR-11779: Use fixed Locale for graph labels.


> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> SOLR-11779.patch, c1.png, c2.png, core.json, d1.png, d2.png, d3.png, 
> jvm-list.json, jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (eg. using RRD4j) to keep the size of the historic 
> data constant (eg. ~64kB per metric), but at the same providing out of the 
> box useful insights into the basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could be also 
> presented in the Admin UI as graphs.
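[Editorial note: the "round-robin database" idea in the description (constant storage per metric regardless of uptime) can be sketched with a plain circular buffer. The ticket proposes RRD4j, which additionally consolidates older samples; the class below is an illustrative stand-in, not RRD4j's API.]

```java
// Minimal round-robin store: a fixed-size circular buffer keeps the last N
// samples of a metric, so storage stays constant no matter how long the node
// runs. Older samples are simply overwritten.
public class RoundRobinMetric {
  private final double[] samples;
  private int next = 0;   // index of the slot to overwrite
  private int count = 0;  // how many slots hold real data

  public RoundRobinMetric(int capacity) {
    this.samples = new double[capacity];
  }

  public void record(double value) {
    samples[next] = value;
    next = (next + 1) % samples.length;  // wrap around, evicting the oldest
    if (count < samples.length) count++;
  }

  // Oldest-to-newest view of the retained history.
  public double[] history() {
    double[] out = new double[count];
    int start = (count < samples.length) ? 0 : next;
    for (int i = 0; i < count; i++) {
      out[i] = samples[(start + i) % samples.length];
    }
    return out;
  }

  public static void main(String[] args) {
    RoundRobinMetric m = new RoundRobinMetric(3);
    for (int v = 1; v <= 5; v++) m.record(v);  // 4 and 5 evict 1 and 2
    System.out.println(java.util.Arrays.toString(m.history())); // [3.0, 4.0, 5.0]
  }
}
```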






[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495323#comment-16495323
 ] 

ASF subversion and git services commented on SOLR-11779:


Commit 1676c08b73084152e0727cfd5c0e984ca4fa641d in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1676c08 ]

SOLR-11779: Use fixed Locale for graph labels.


> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> SOLR-11779.patch, c1.png, c2.png, core.json, d1.png, d2.png, d3.png, 
> jvm-list.json, jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (eg. using RRD4j) to keep the size of the historic 
> data constant (eg. ~64kB per metric), but at the same providing out of the 
> box useful insights into the basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could be also 
> presented in the Admin UI as graphs.






[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495322#comment-16495322
 ] 

ASF subversion and git services commented on SOLR-11779:


Commit 090159f9aa6d0285e674bcdc172386c4f4925847 in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=090159f ]

SOLR-11779: Basic long-term collection of aggregated metrics.


> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> SOLR-11779.patch, c1.png, c2.png, core.json, d1.png, d2.png, d3.png, 
> jvm-list.json, jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (eg. using RRD4j) to keep the size of the historic 
> data constant (eg. ~64kB per metric), but at the same providing out of the 
> box useful insights into the basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could be also 
> presented in the Admin UI as graphs.






[jira] [Resolved] (LUCENE-8300) Add unordered-distinct IntervalsSource

2018-05-30 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8300.
---
   Resolution: Fixed
Fix Version/s: 7.4

> Add unordered-distinct IntervalsSource
> --
>
> Key: LUCENE-8300
> URL: https://issues.apache.org/jira/browse/LUCENE-8300
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8300.patch, LUCENE-8300.patch
>
>
> [~mattweber] pointed out on LUCENE-8196 that {{Intervals.unordered()}} 
> doesn't check to see if its subintervals overlap, which means that for 
> example {{Intervals.unordered(Intervals.term("a"), Intervals.term("a"))}} 
> would match a document with {{a}} appearing only once.  This ticket will 
> introduce a new function, {{Intervals.unordered_distinct()}}, that ensures 
> that all subintervals within an unordered interval do not overlap.






[jira] [Commented] (LUCENE-8300) Add unordered-distinct IntervalsSource

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495287#comment-16495287
 ] 

ASF subversion and git services commented on LUCENE-8300:
-

Commit e3d4c7e9b746f77482bec0b5bb82e94adde12da3 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e3d4c7e ]

LUCENE-8300: Allow unordered intervals to exclude overlaps


> Add unordered-distinct IntervalsSource
> --
>
> Key: LUCENE-8300
> URL: https://issues.apache.org/jira/browse/LUCENE-8300
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8300.patch, LUCENE-8300.patch
>
>
> [~mattweber] pointed out on LUCENE-8196 that {{Intervals.unordered()}} 
> doesn't check to see if its subintervals overlap, which means that for 
> example {{Intervals.unordered(Intervals.term("a"), Intervals.term("a"))}} 
> would match a document with {{a}} appearing only once.  This ticket will 
> introduce a new function, {{Intervals.unordered_distinct()}}, that ensures 
> that all subintervals within an unordered interval do not overlap.






[jira] [Commented] (LUCENE-8300) Add unordered-distinct IntervalsSource

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495286#comment-16495286
 ] 

ASF subversion and git services commented on LUCENE-8300:
-

Commit 083dc0811bd44fe434ecaaad892383d48a17d2a8 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=083dc08 ]

LUCENE-8300: Allow unordered intervals to exclude overlaps


> Add unordered-distinct IntervalsSource
> --
>
> Key: LUCENE-8300
> URL: https://issues.apache.org/jira/browse/LUCENE-8300
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8300.patch, LUCENE-8300.patch
>
>
> [~mattweber] pointed out on LUCENE-8196 that {{Intervals.unordered()}} 
> doesn't check to see if its subintervals overlap, which means that for 
> example {{Intervals.unordered(Intervals.term("a"), Intervals.term("a"))}} 
> would match a document with {{a}} appearing only once.  This ticket will 
> introduce a new function, {{Intervals.unordered_distinct()}}, that ensures 
> that all subintervals within an unordered interval do not overlap.






[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191807600
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java 
---
@@ -184,14 +179,7 @@ public String getHashableId() {
 return getHashableId(solrDoc);
   }
 
-  public List<SolrInputDocument> getDocsList() {
-if (docsList == null) {
-  buildDocsList();
-}
-return docsList;
-  }
-
-  private void buildDocsList() {
+  public List<SolrInputDocument> computeFlattenedDocs() {
 List<SolrInputDocument> all = flatten(solrDoc);
 
 String idField = getHashableId();
--- End diff --

rename rootId?  "field" feels very wrong in this variable name.  


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191805956
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java 
---
@@ -206,13 +194,13 @@ private void buildDocsList() {
   // TODO: if possible concurrent modification exception (if 
SolrInputDocument not cloned and is being forwarded to replicas)
   // then we could add this field to the generated lucene document 
instead.
 }
-docsList = all;
+return all;
   }
 
  private List<SolrInputDocument> flatten(SolrInputDocument root) {
 List<SolrInputDocument> unwrappedDocs = new ArrayList<>();
 if(root.hasChildDocuments()) {
--- End diff --

this condition/guard isn't needed


---




[jira] [Commented] (SOLR-12374) Add SolrCore.withSearcher(lambda accepting SolrIndexSearcher)

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495279#comment-16495279
 ] 

ASF subversion and git services commented on SOLR-12374:


Commit 9aa16b64c741294bb8e48d0a19fd5ae4b072b359 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9aa16b6 ]

SOLR-12374: Added SolrCore.withSearcher(lambda) convenience.
* and fixed SnapShooter.getIndexCommit bug forgetting to decref (rare?)


> Add SolrCore.withSearcher(lambda accepting SolrIndexSearcher)
> -
>
> Key: SOLR-12374
> URL: https://issues.apache.org/jira/browse/SOLR-12374
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12374.patch
>
>
> I propose adding the following to SolrCore:
> {code:java}
>   /**
>* Executes the lambda with the {@link SolrIndexSearcher}.  This is more 
> convenient than using
>* {@link #getSearcher()} since there is no ref-counting business to worry 
> about.
>* Example:
>* <pre>
>*   IndexReader reader = 
> h.getCore().withSearcher(SolrIndexSearcher::getIndexReader);
>* </pre>
>*/
>   @SuppressWarnings("unchecked")
>   public <R> R withSearcher(Function<SolrIndexSearcher, R> lambda) {
> final RefCounted<SolrIndexSearcher> refCounted = getSearcher();
> try {
>   return lambda.apply(refCounted.get());
> } finally {
>   refCounted.decref();
> }
>   }
> {code}
> This is a nice tight convenience method, avoiding the clumsy RefCounted API 
> which is easy to accidentally incorrectly use – see 
> https://issues.apache.org/jira/browse/SOLR-11616?focusedCommentId=16477719&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16477719
> I guess my only (small) concern is if hypothetically you might make the 
> lambda short because it's easy to do that (see the one-liner example above) 
> but the object you return that you're interested in  (say IndexReader) could 
> potentially become invalid if the SolrIndexSearcher closes.  But I think/hope 
> that's impossible normally based on when this getSearcher() used?  I could at 
> least add a warning to the docs.
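[Editorial note: the try/finally decref pattern that withSearcher() encapsulates can be sketched with a toy stand-in for Solr's RefCounted; all names below are illustrative, not Solr's API.]

```java
import java.util.function.Function;

// Sketch of the resource-lending pattern: the lambda runs while a reference
// is held, and decref happens even if the lambda throws.
public class WithResourceDemo {

  static final class RefCounted<T> {
    private final T resource;
    int refcount = 1;  // visible so the demo can show no ref is leaked
    RefCounted(T resource) { this.resource = resource; }
    T get() { return resource; }
    void decref() { refcount--; }
  }

  private final RefCounted<String> searcher = new RefCounted<>("searcher");

  RefCounted<String> getSearcher() {
    searcher.refcount++;  // caller is responsible for decref()
    return searcher;
  }

  // The convenience wrapper: no ref-counting for the caller to get wrong.
  <R> R withSearcher(Function<String, R> lambda) {
    final RefCounted<String> refCounted = getSearcher();
    try {
      return lambda.apply(refCounted.get());
    } finally {
      refCounted.decref();
    }
  }

  public static void main(String[] args) {
    WithResourceDemo core = new WithResourceDemo();
    int len = core.withSearcher(String::length);
    System.out.println(len);                     // 8
    System.out.println(core.searcher.refcount);  // back to 1: no leaked ref
  }
}
```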






[jira] [Commented] (SOLR-12374) Add SolrCore.withSearcher(lambda accepting SolrIndexSearcher)

2018-05-30 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495280#comment-16495280
 ] 

David Smiley commented on SOLR-12374:
-

woops; thanks!

> Add SolrCore.withSearcher(lambda accepting SolrIndexSearcher)
> -
>
> Key: SOLR-12374
> URL: https://issues.apache.org/jira/browse/SOLR-12374
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12374.patch
>
>
> I propose adding the following to SolrCore:
> {code:java}
>   /**
>* Executes the lambda with the {@link SolrIndexSearcher}.  This is more 
> convenient than using
>* {@link #getSearcher()} since there is no ref-counting business to worry 
> about.
>* Example:
>* <pre>
>*   IndexReader reader = 
> h.getCore().withSearcher(SolrIndexSearcher::getIndexReader);
>* </pre>
>*/
>   @SuppressWarnings("unchecked")
>   public <R> R withSearcher(Function<SolrIndexSearcher, R> lambda) {
> final RefCounted<SolrIndexSearcher> refCounted = getSearcher();
> try {
>   return lambda.apply(refCounted.get());
> } finally {
>   refCounted.decref();
> }
>   }
> {code}
> This is a nice tight convenience method, avoiding the clumsy RefCounted API 
> which is easy to accidentally incorrectly use – see 
> https://issues.apache.org/jira/browse/SOLR-11616?focusedCommentId=16477719&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16477719
> I guess my only (small) concern is if hypothetically you might make the 
> lambda short because it's easy to do that (see the one-liner example above) 
> but the object you return that you're interested in  (say IndexReader) could 
> potentially become invalid if the SolrIndexSearcher closes.  But I think/hope 
> that's impossible normally based on when this getSearcher() used?  I could at 
> least add a warning to the docs.






[jira] [Resolved] (SOLR-12425) How to change schema field type from string to tdate.

2018-05-30 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-12425.
---
Resolution: Information Provided

This issue tracker is not a support portal. Please raise this question on the 
users' list at solr-u...@lucene.apache.org (see 
http://lucene.apache.org/solr/community.html#mailing-lists-irc); there are a 
_lot_ more people watching that list who may be able to help, and you'll 
probably get responses much more quickly.

If it's determined that this really is a code issue in Solr and not a 
configuration/usage problem, we can raise a new JIRA or reopen this one.

Short form (and please use the user's list for any further clarification):

You can't. You must reindex from scratch, preferably into a new collection. Or 
use a new field defined properly and index into that.

> How to change schema field type from string to tdate.
> -
>
> Key: SOLR-12425
> URL: https://issues.apache.org/jira/browse/SOLR-12425
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Vikash Kumar
>Priority: Major
>  Labels: error
>
> How can we change a schema field type from string to tdate? We already have lots 
> of data indexed, and after just changing the type in the schema file we get the 
> following error. Please help...
>  
> {
>  "error":{
>  "trace":"java.lang.NullPointerException\n\tat 
> org.apache.lucene.util.LegacyNumericUtils.prefixCodedToLong(LegacyNumericUtils.java:189)\n\tat
>  org.apache.solr.schema.TrieField.toObject(TrieField.java:157)\n\tat 
> org.apache.solr.schema.TrieDateField.toObject(TrieDateField.java:92)\n\tat 
> org.apache.solr.schema.TrieDateField.toObject(TrieDateField.java:85)\n\tat 
> org.apache.solr.schema.TrieField.write(TrieField.java:324)\n\tat 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:133)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeSolrDocument(JSONResponseWriter.java:345)\n\tat
>  
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:249)\n\tat
>  
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:151)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)\n\tat
>  
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)\n\tat
>  
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)\n\tat
>  
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:731)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:513)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceCo

[jira] [Commented] (SOLR-12358) Autoscaling suggestions fail randomly and for certain policies

2018-05-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495272#comment-16495272
 ] 

Jan Høydahl commented on SOLR-12358:


The SOLR-12375 entry is now doubled up :) 

> Autoscaling suggestions fail randomly and for certain policies
> --
>
> Key: SOLR-12358
> URL: https://issues.apache.org/jira/browse/SOLR-12358
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.3.1
>Reporter: Jerry Bao
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-12358.patch, SOLR-12358.patch, SOLR-12358.patch, 
> SOLR-12358.patch, diagnostics, nodes
>
>
> For the following policy
> {code:java}
> {"cores": "<4","node": "#ANY"}{code}
> the suggestions endpoint fails
> {code:java}
> "error": {"msg": "Comparison method violates its general contract!","trace": 
> "java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!\n\tat java.util.TimSort.mergeHi(TimSort.java:899)\n\tat 
> java.util.TimSort.mergeAt(TimSort.java:516)\n\tat 
> java.util.TimSort.mergeCollapse(TimSort.java:441)\n\tat 
> java.util.TimSort.sort(TimSort.java:245)\n\tat 
> java.util.Arrays.sort(Arrays.java:1512)\n\tat 
> java.util.ArrayList.sort(ArrayList.java:1462)\n\tat 
> java.util.Collections.sort(Collections.java:175)\n\tat 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.setApproxValuesAndSortNodes(Policy.java:363)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.applyRules(Policy.java:310)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.&lt;init&gt;(Policy.java:272)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.createSession(Policy.java:376)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSuggestions(PolicyHelper.java:214)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleSuggestions(AutoScalingHandler.java:158)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleRequestBody(AutoScalingHandler.java:133)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)\n\tat
>  org.apache.solr.api.ApiBag$ReqHandlerToApi.call(ApiBag.java:242)\n\tat 
> org.apache.solr.api.V2HttpCall.handleAdmin(V2HttpCall.java:311)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:530)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)\n\tat 
> org.eclipse.jetty.server.HttpConnectio
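
For readers hitting the same failure: TimSort (used by Collections.sort since JDK 7) raises this exception when a Comparator breaks its contract, e.g. when "equality" is not transitive. A minimal, hypothetical Java illustration of the contract violation (not the Solr node comparator itself):

```java
import java.util.Comparator;

public class ContractDemo {
    // Hypothetical comparator: "values within 10 of each other are equal".
    // This breaks transitivity of equality (0 == 8, 8 == 16, yet 0 < 16),
    // which is exactly the kind of violation that can make TimSort throw
    // "Comparison method violates its general contract!" on large inputs.
    static final Comparator<Integer> BROKEN = (x, y) -> {
        if (Math.abs(x - y) < 10) {
            return 0; // "close enough" => treated as equal
        }
        return Integer.compare(x, y);
    };

    public static void main(String[] args) {
        System.out.println(BROKEN.compare(0, 8));  // 0  (equal)
        System.out.println(BROKEN.compare(8, 16)); // 0  (equal)
        System.out.println(BROKEN.compare(0, 16)); // -1 (less) -- inconsistent
    }
}
```

In the Solr case the comparator's inputs (node metrics) changing mid-sort can have the same effect as a non-transitive comparator.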

[jira] [Commented] (LUCENE-5143) rm or formalize dealing with "general" KEYS files in our dist dir

2018-05-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495267#comment-16495267
 ] 

Jan Høydahl commented on LUCENE-5143:
-

{noformat}
SUMMARY
===
Number of artifacts to check:  162
Number of artifacts checked :  162
Number of artifacts SUCCESS :  162
Number of artifacts FAILED  :0
{noformat}


> rm or formalize dealing with "general" KEYS files in our dist dir
> -
>
> Key: LUCENE-5143
> URL: https://issues.apache.org/jira/browse/LUCENE-5143
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: KEYS, KEYS, KEYS, KEYS, LUCENE-5143.patch, 
> LUCENE-5143.patch, LUCENE-5143.patch, LUCENE-5143.patch, 
> LUCENE-5143_READMEs.patch, LUCENE-5143_READMEs.patch, 
> LUCENE-5143_READMEs.patch, LUCENE_5143_KEYS.patch, verify.log, verify.sh, 
> verify.sh, verify.sh
>
>
> At some point in the past, we started creating a snapshots of KEYS (taken 
> from the auto-generated data from id.apache.org) in the release dir of each 
> release...
> http://www.apache.org/dist/lucene/solr/4.4.0/KEYS
> http://www.apache.org/dist/lucene/java/4.4.0/KEYS
> http://archive.apache.org/dist/lucene/java/4.3.0/KEYS
> http://archive.apache.org/dist/lucene/solr/4.3.0/KEYS
> etc...
> But we also still have some "general" KEYS files...
> https://www.apache.org/dist/lucene/KEYS
> https://www.apache.org/dist/lucene/java/KEYS
> https://www.apache.org/dist/lucene/solr/KEYS
> ...which (as i discovered when i went to add my key to them today) are stale 
> and don't seem to be getting updated.
> I vaguely remember someone (rmuir?) explaining to me at one point the reason 
> we started creating a fresh copy of KEYS in each release dir, but i no longer 
> remember what they said, and i can't find any mention of a reason in any of 
> the release docs, or in any sort of comment in buildAndPushRelease.py
> we should probably do one of the following:
>  * remove these "general" KEYS files
>  * add a disclaimer to the top of these files that they are legacy files for 
> verifying old releases and are no longer used for new releases
>  * ensure these files are up to date & stop generating per-release KEYS file 
> copies
>  * update our release process to ensure that the general files get updated on 
> each release as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12358) Autoscaling suggestions fail randomly and for certain policies

2018-05-30 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495264#comment-16495264
 ] 

Noble Paul commented on SOLR-12358:
---

Hope I have undone the damage

> Autoscaling suggestions fail randomly and for certain policies
> --
>
> Key: SOLR-12358
> URL: https://issues.apache.org/jira/browse/SOLR-12358
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.3.1
>Reporter: Jerry Bao
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-12358.patch, SOLR-12358.patch, SOLR-12358.patch, 
> SOLR-12358.patch, diagnostics, nodes
>
>
> For the following policy
> {code:java}
> {"cores": "<4","node": "#ANY"}{code}
> the suggestions endpoint fails

[jira] [Commented] (SOLR-12358) Autoscaling suggestions fail randomly and for certain policies

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495250#comment-16495250
 ] 

ASF subversion and git services commented on SOLR-12358:


Commit f7b95c6db9a1a8173b2f5a6c6fe4d0a7ba035ec3 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f7b95c6 ]

SOLR-12358: reverting the changes caused by the merge


> Autoscaling suggestions fail randomly and for certain policies
> --
>
> Key: SOLR-12358
> URL: https://issues.apache.org/jira/browse/SOLR-12358
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.3.1
>Reporter: Jerry Bao
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-12358.patch, SOLR-12358.patch, SOLR-12358.patch, 
> SOLR-12358.patch, diagnostics, nodes
>
>
> For the following policy
> {code:java}
> {"cores": "<4","node": "#ANY"}{code}
> the suggestions endpoint fails

[jira] [Commented] (SOLR-12374) Add SolrCore.withSearcher(lambda accepting SolrIndexSearcher)

2018-05-30 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495231#comment-16495231
 ] 

Yonik Seeley commented on SOLR-12374:
-

The CHANGES for 7.4 has:
* SOLR-12374: SnapShooter.getIndexCommit can forget to decref the searcher; 
though it's not clear in practice when.
 (David Smiley)

But it's missing on the master branch...

> Add SolrCore.withSearcher(lambda accepting SolrIndexSearcher)
> -
>
> Key: SOLR-12374
> URL: https://issues.apache.org/jira/browse/SOLR-12374
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12374.patch
>
>
> I propose adding the following to SolrCore:
> {code:java}
>   /**
>* Executes the lambda with the {@link SolrIndexSearcher}.  This is more 
> convenient than using
>* {@link #getSearcher()} since there is no ref-counting business to worry 
> about.
>* Example:
>* <pre>
>*   IndexReader reader = 
> h.getCore().withSearcher(SolrIndexSearcher::getIndexReader);
>* </pre>
>*/
>   @SuppressWarnings("unchecked")
>   public <R> R withSearcher(Function<SolrIndexSearcher, R> lambda) {
> final RefCounted<SolrIndexSearcher> refCounted = getSearcher();
> try {
>   return lambda.apply(refCounted.get());
> } finally {
>   refCounted.decref();
> }
>   }
> {code}
> This is a nice tight convenience method, avoiding the clumsy RefCounted API 
> which is easy to accidentally incorrectly use – see 
> https://issues.apache.org/jira/browse/SOLR-11616?focusedCommentId=16477719&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16477719
> I guess my only (small) concern is if hypothetically you might make the 
> lambda short because it's easy to do that (see the one-liner example above) 
> but the object you return that you're interested in (say IndexReader) could 
> potentially become invalid if the SolrIndexSearcher closes.  But I think/hope 
> that's impossible normally based on when this getSearcher() is used?  I could 
> at least add a warning to the docs.
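
Independent of the Solr classes, the try/finally ref-counting pattern the proposal wraps can be sketched in isolation (hypothetical RefCounted stand-in, not Solr's implementation):

```java
import java.util.function.Function;

// Minimal stand-in for a ref-counted resource wrapper (hypothetical,
// for illustration only -- not Solr's RefCounted).
class RefCounted<T> {
    private final T resource;
    private int refCount = 1;
    RefCounted(T resource) { this.resource = resource; }
    T get() { return resource; }
    synchronized void decref() { refCount--; }
    synchronized int getRefCount() { return refCount; }
}

public class WithResourceDemo {
    // The pattern proposed above: run the lambda, always release the ref,
    // even if the lambda throws.
    static <T, R> R withResource(RefCounted<T> ref, Function<T, R> lambda) {
        try {
            return lambda.apply(ref.get());
        } finally {
            ref.decref();
        }
    }

    public static void main(String[] args) {
        RefCounted<String> ref = new RefCounted<>("searcher");
        String upper = withResource(ref, String::toUpperCase);
        System.out.println(upper);              // SEARCHER
        System.out.println(ref.getRefCount());  // 0 -- released on success too
    }
}
```

The concern above then amounts to: prefer returning a value computed inside the lambda, not an object whose validity is tied to the resource's lifetime.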






[jira] [Commented] (SOLR-12417) velocity response writer v.json should enforce valid function name

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495225#comment-16495225
 ] 

ASF subversion and git services commented on SOLR-12417:


Commit 0c31969e6c73b9037e55bffcb907842745a7c3cc in lucene-solr's branch 
refs/heads/branch_7x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c31969 ]

SOLR-12417: enforce valid function name for v.json


> velocity response writer v.json should enforce valid function name
> --
>
> Key: SOLR-12417
> URL: https://issues.apache.org/jira/browse/SOLR-12417
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: VelocityResponseWriter should enforce that v.json 
> parameter is just a function name
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Attachments: SOLR-12417.patch
>
>







[jira] [Commented] (SOLR-7830) topdocs facet function

2018-05-30 Thread Tim Owen (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495223#comment-16495223
 ] 

Tim Owen commented on SOLR-7830:


I've attached a new patch, I took your original patch and updated it for the 7x 
branch, then added distributed search support (the merging and re-sorting).

We wanted this functionality as it's really useful to fetch 1 or 2 sample 
documents with each bucket for some of our use-cases, and this approach of 
using the topdocs aggregate function works really nicely.

The only limitation is that the sorting for distributed searches can only work 
with field sorting, not with functional sorting, and you can only sort by 
fields that are included in the results (otherwise it would need to include the 
sort values in shard responses - this could be done, but it was more complex 
and we didn't need that for our use-case). Also, the offset parameter isn't 
used, but we felt pagination of these topdocs was quite niche (but it could be 
added to this patch).
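
The field-sort merge described above can be sketched roughly as follows (hypothetical names and shapes, not the patch's actual classes): each shard returns its bucket's top docs sorted on a field, the coordinator re-sorts the union on that field and keeps the global top N. This is also why the sort field must appear in the shard responses -- the merge has nothing else to compare on.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class TopDocsMerge {
    // Hypothetical coordinator-side merge: flatten per-shard lists,
    // re-sort on the field present in every doc, keep the top `limit`.
    static List<Map<String, Object>> merge(
            List<List<Map<String, Object>>> perShard, String sortField, int limit) {
        List<Map<String, Object>> all = new ArrayList<>();
        perShard.forEach(all::addAll);
        all.sort((a, b) -> {
            @SuppressWarnings("unchecked")
            Comparable<Object> ka = (Comparable<Object>) a.get(sortField);
            return ka.compareTo(b.get(sortField));
        });
        return all.subList(0, Math.min(limit, all.size()));
    }
}
```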

> topdocs facet function
> --
>
> Key: SOLR-7830
> URL: https://issues.apache.org/jira/browse/SOLR-7830
> Project: Solr
>  Issue Type: New Feature
>  Components: Facet Module
>Reporter: Yonik Seeley
>Priority: Major
> Attachments: ALT-SOLR-7830.patch, SOLR-7830.patch, SOLR-7830.patch
>
>
> A topdocs() facet function would return the top N documents per facet bucket.
> This would be a big step toward unifying grouping and the new facet module.






[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191785849
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java 
---
@@ -250,6 +261,7 @@ private void recUnwrapRelations(List<SolrInputDocument> 
unwrappedDocs, SolrInput
 recUnwrapRelations(unwrappedDocs, currentDoc, false);
   }
 
+  /** Extract all anonymous child documents from parent. */
  private void recUnwrapp(List<SolrInputDocument> unwrappedDocs, 
SolrInputDocument currentDoc, boolean isRoot) {
--- End diff --

Sure thing
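
For readers following along, the anonymous-child unwrapping discussed here can be sketched with a toy document class (hypothetical, not SolrInputDocument): collect every descendant depth-first, with each parent after its children, mirroring Lucene's block convention where the root document comes last.

```java
import java.util.ArrayList;
import java.util.List;

public class UnwrapDemo {
    // Toy stand-in for a document with anonymous child documents.
    static class Doc {
        final String id;
        final List<Doc> children = new ArrayList<>();
        Doc(String id) { this.id = id; }
    }

    // Depth-first unwrap: children first, then the document itself,
    // so the root ends up last in the flattened list.
    static void recUnwrap(List<Doc> out, Doc current) {
        for (Doc child : current.children) {
            recUnwrap(out, child);
        }
        out.add(current);
    }
}
```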


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191785710
  
--- Diff: solr/core/src/test/org/apache/solr/update/AddBlockUpdateTest.java 
---
@@ -693,7 +668,6 @@ private void 
indexSolrInputDocumentsDirectly(List<SolrInputDocument> docs) throw
   h.getCore().getUpdateHandler().addDoc(updateCmd);
   updateCmd.clear();
 }
-assertU(commit());
--- End diff --

Sure I'll add it back my bad


---




[jira] [Updated] (SOLR-7830) topdocs facet function

2018-05-30 Thread Tim Owen (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Owen updated SOLR-7830:
---
Attachment: ALT-SOLR-7830.patch

> topdocs facet function
> --
>
> Key: SOLR-7830
> URL: https://issues.apache.org/jira/browse/SOLR-7830
> Project: Solr
>  Issue Type: New Feature
>  Components: Facet Module
>Reporter: Yonik Seeley
>Priority: Major
> Attachments: ALT-SOLR-7830.patch, SOLR-7830.patch, SOLR-7830.patch
>
>
> A topdocs() facet function would return the top N documents per facet bucket.
> This would be a big step toward unifying grouping and the new facet module.






[jira] [Commented] (SOLR-12417) velocity response writer v.json should enforce valid function name

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495215#comment-16495215
 ] 

ASF subversion and git services commented on SOLR-12417:


Commit 107fd24ec7849d245c701882d3009463787165a3 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=107fd24 ]

SOLR-12417: enforce valid function name for v.json


> velocity response writer v.json should enforce valid function name
> --
>
> Key: SOLR-12417
> URL: https://issues.apache.org/jira/browse/SOLR-12417
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: VelocityResponseWriter should enforce that v.json 
> parameter is just a function name
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Attachments: SOLR-12417.patch
>
>







[jira] [Commented] (LUCENE-8338) Ensure number returned for PendingDeletes are well defined

2018-05-30 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495212#comment-16495212
 ] 

Lucene/Solr QA commented on LUCENE-8338:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 43m 
25s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8338 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925561/LUCENE-8338.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 6ca0c5f |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/20/testReport/ |
| modules | C: lucene/core U: lucene/core |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/20/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Ensure number returned for PendingDeletes are well defined
> --
>
> Key: LUCENE-8338
> URL: https://issues.apache.org/jira/browse/LUCENE-8338
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8338.patch, LUCENE-8338.patch
>
>
>  Today a call to PendingDeletes#numPendingDeletes might return 0
> if the deletes are written to disk. This doesn't mean these values are 
> committed
> or refreshed in the latest reader. Some places in IW use these numbers to 
> make
> decisions if there have been deletes added since last time checked 
> (BufferedUpdateStream)
> which can cause wrong (while not fatal) decisions, i.e. to kick off new 
> merges.
> 
> Now this API is made protected and not visible outside of PendingDeletes 
> to prevent
> any kind of confusion. The APIs now allow to get absolute numbers of 
> getDelCount and numDocs
> which have the same name and semantics as their relatives on 
> IndexReader/Writer
> and SegmentCommitInfo.
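
The "absolute numbers" semantics described above can be illustrated with a toy sketch (hypothetical, not Lucene's actual PendingDeletes): the counters never reset just because deletes were flushed to disk.

```java
public class PendingDeletesSketch {
    private final int totalMaxDoc; // docs in the segment, including deleted
    private int delCount;          // all deletes, flushed or not

    PendingDeletesSketch(int totalMaxDoc) { this.totalMaxDoc = totalMaxDoc; }

    // Simplified: assume docId was not already deleted.
    void delete(int docId) { delCount++; }

    // Absolute numbers, mirroring the getDelCount/numDocs naming the
    // issue proposes: writing deletes to disk does not change them.
    int getDelCount() { return delCount; }
    int numDocs() { return totalMaxDoc - delCount; }
}
```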






[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191781137
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java 
---
@@ -250,6 +261,7 @@ private void recUnwrapRelations(List<SolrInputDocument> unwrappedDocs, SolrInput
 recUnwrapRelations(unwrappedDocs, currentDoc, false);
   }
 
+  /** Extract all anonymous child documents from parent. */
  private void recUnwrapp(List<SolrInputDocument> unwrappedDocs, SolrInputDocument currentDoc, boolean isRoot) {
--- End diff --

Oh; I didn't know -- the docs then suggest to me this method ought to have 
a clearer name ;-) e.g. `recUnwrapAnonymous`


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191780669
  
--- Diff: solr/core/src/test/org/apache/solr/update/AddBlockUpdateTest.java 
---
@@ -693,7 +668,6 @@ private void indexSolrInputDocumentsDirectly(List<SolrInputDocument> docs) throw
   h.getCore().getUpdateHandler().addDoc(updateCmd);
   updateCmd.clear();
 }
-assertU(commit());
--- End diff --

this part was fine, especially for nested docs: we want a higher likelihood 
of more segments, since if there's a bug pertaining to not adding docs in a 
block, more segments will help prove it out.


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191782336
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java 
---
@@ -228,11 +225,25 @@ private void buildDocsList() {
 return unwrappedDocs;
   }
 
+  private static Collection<String> getChildDocumentsKeys(SolrInputDocument doc) {
--- End diff --

But this is only called in one place; no?  I think it's better if 
recUnwrapRelations does this logic itself as it avoids the intermediate HashSet.
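The suggestion above — dropping the intermediate HashSet by letting recUnwrapRelations test each field itself in a single pass — might look like this minimal sketch (child documents are stubbed as nested Maps rather than SolrInputDocument, and `unwrapChildren` is a hypothetical stand-in, not Solr's actual method):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class InlineChildKeys {
    // Test each field inline during one pass instead of first collecting
    // the child-document keys into an intermediate HashSet and iterating it.
    static List<Object> unwrapChildren(Map<String, Object> doc) {
        List<Object> children = new ArrayList<>();
        for (Map.Entry<String, Object> e : doc.entrySet()) {
            if (e.getValue() instanceof Map) {  // "is this field a child document?"
                children.add(e.getValue());     // no intermediate key set needed
            }
        }
        return children;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("id", "1");
        doc.put("child", Map.of("id", "1.1"));
        System.out.println(unwrapChildren(doc).size()); // prints 1
    }
}
```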


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191780149
  
--- Diff: solr/test-framework/src/java/org/apache/solr/SolrTestCaseJ4.java 
---
@@ -2181,16 +2182,38 @@ public boolean compareSolrInputDocument(Object 
expected, Object actual) {
 Iterator<String> iter1 = sdoc1.getFieldNames().iterator();
 Iterator<String> iter2 = sdoc2.getFieldNames().iterator();
 
-if(iter1.hasNext()) {
+while (iter1.hasNext()) {
--- End diff --

ouch; good catch!
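The bug being praised here is the classic if-instead-of-while iterator mistake: with `if`, only the first field name of each document gets compared. A minimal sketch of the fix (field names stubbed as plain lists; `sameFieldNames` is a hypothetical stand-in for the comparison helper):

```java
import java.util.Iterator;
import java.util.List;

public class IterCompare {
    static boolean sameFieldNames(List<String> names1, List<String> names2) {
        if (names1.size() != names2.size()) return false;
        Iterator<String> iter1 = names1.iterator();
        Iterator<String> iter2 = names2.iterator();
        while (iter1.hasNext()) {  // was: if (iter1.hasNext()) — checked only the first element
            if (!iter1.next().equals(iter2.next())) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Same first field, different second: an `if` would wrongly report equal.
        System.out.println(sameFieldNames(List.of("id", "a"), List.of("id", "b"))); // prints false
    }
}
```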


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191776629
  
--- Diff: solr/core/src/test/org/apache/solr/update/AddBlockUpdateTest.java 
---
@@ -550,14 +531,8 @@ public void testJavaBinCodecNestedRelation() throws 
IOException {
 try (JavaBinCodec jbc = new JavaBinCodec(); InputStream is = new 
ByteArrayInputStream(buffer)) {
   result = (SolrInputDocument) jbc.unmarshal(is);
 }
-assertEquals(2 + childsNum, result.size());
-assertEquals("v1", result.getFieldValue("parent_f1"));
-assertEquals("v2", result.getFieldValue("parent_f2"));
-
-for (Map.Entry<String, SolrInputDocument> entry : children.entrySet()) {
-  compareSolrInputDocument(entry.getValue(), 
result.getFieldValue(entry.getKey()));
-}
 
+assertTrue(compareSolrInputDocument(topDocument, result));
--- End diff --

cool; you're teaching this old dog some new tricks :-)


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191775263
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/IgnoreLargeDocumentProcessorFactory.java
 ---
@@ -165,11 +165,9 @@ private static long fastEstimate(Map 
map) {
 if (value instanceof Map) {
   size += fastEstimate(entry.getValue());
--- End diff --

Cast entry.getValue() to a Map to dispatch directly to the appropriate 
fastEstimate overloaded method.
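The cast works because Java resolves overloads from the argument's *static* type, not its runtime type. A self-contained sketch (the `fastEstimate` signatures here are simplified stand-ins, not Solr's actual ones):

```java
import java.util.Map;

public class OverloadDispatch {
    // Illustrative overloads: one for Maps, one fallback for anything else.
    static long fastEstimate(Map<?, ?> map) { return 10 + map.size(); }

    static long fastEstimate(Object value) {
        // A Map held in an Object-typed variable lands here; the explicit cast
        // routes the recursive call to the Map overload at compile time.
        if (value instanceof Map) return fastEstimate((Map<?, ?>) value);
        return 1;
    }

    public static void main(String[] args) {
        Object value = Map.of("k", "v");
        System.out.println(fastEstimate(value)); // prints 11: cast picked the Map overload
    }
}
```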


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191779761
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java ---
@@ -417,7 +417,8 @@ private void addAndDelete(AddUpdateCommand cmd, 
List deletesAfter
   }
 
   private Term getIdTerm(AddUpdateCommand cmd) {
--- End diff --

I think it's trappy/dangerous to invoke getDocsList(), an innocent-looking 
getter that actually flattens the input and caches it, rendering any further 
changes to the unflattened docs silently ignored. Can you change this method 
to accept the List of flattened documents as its argument? And change 
getDocsList to not cache the result and be named something like 
computeFlattenedDocs()
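The refactor suggested above — compute the flattened list once, explicitly, and pass the result to callers — might be sketched like this (documents stubbed as strings in nested lists; all names here are illustrative, not Solr's actual API):

```java
import java.util.ArrayList;
import java.util.List;

public class FlattenOnce {
    // Explicit, non-caching computation: the "compute" name signals work is done.
    static List<String> computeFlattenedDocs(List<?> nested) {
        List<String> flat = new ArrayList<>();
        flatten(nested, flat);
        return flat;
    }

    static void flatten(List<?> nested, List<String> out) {
        for (Object o : nested) {
            if (o instanceof List) flatten((List<?>) o, out);
            else out.add(o.toString());
        }
    }

    // Consumers take the already-flattened list as an argument, so there is
    // no hidden cached state that later mutations would silently bypass.
    static String firstId(List<String> flattenedDocs) { return flattenedDocs.get(0); }

    public static void main(String[] args) {
        List<String> flat = computeFlattenedDocs(List.of("root", List.of("child1", "child2")));
        System.out.println(firstId(flat)); // prints root
    }
}
```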


---




[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-05-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r191775964
  
--- Diff: solr/core/src/test/org/apache/solr/update/AddBlockUpdateTest.java 
---
@@ -269,37 +269,27 @@ public void testExceptionThrown() throws Exception {
 
   @Test
   public void testSolrNestedFieldsList() throws Exception {
-SolrInputDocument document1 = new SolrInputDocument() {
-  {
-final String id = id();
-addField("id", id);
-addField("parent_s", "X");
-addField("children",
-new ArrayList<SolrInputDocument>()
-{
-  {
-add(sdoc("id", id(), "child_s", "y"));
-add(sdoc("id", id(), "child_s", "z"));
-  }
-});
-  }
-};
 
-SolrInputDocument document2 = new SolrInputDocument() {
-  {
-final String id = id();
-addField("id", id);
-addField("parent_s", "A");
-addField("children",
-new ArrayList<SolrInputDocument>()
-{
-  {
-add(sdoc("id", id(), "child_s", "b"));
-add(sdoc("id", id(), "child_s", "c"));
-  }
-});
-  }
-};
+final String id1 = id();
+List<SolrInputDocument> children1 = new ArrayList<SolrInputDocument>()
--- End diff --

Please avoid anonymous subclasses that exist for the sole purpose of 
building.  I think it's just as concise and clear to say 
`Arrays.asList(sdoc(...), sdoc(...), sdoc(...))`
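For contrast, a compilable sketch of the two styles being discussed (field contents are placeholders):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListBuilding {
    // Anti-pattern: "double-brace initialization" creates a hidden anonymous
    // ArrayList subclass per use site (and, in instance contexts, captures
    // the enclosing instance).
    static List<String> viaDoubleBrace() {
        return new ArrayList<String>() {{
            add("child-a");
            add("child-b");
        }};
    }

    // Preferred: just as concise, no extra class generated.
    static List<String> viaAsList() {
        return Arrays.asList("child-a", "child-b");
    }

    public static void main(String[] args) {
        System.out.println(viaDoubleBrace().equals(viaAsList())); // prints true
    }
}
```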


---




[jira] [Comment Edited] (SOLR-12358) Autoscaling suggestions fail randomly and for certain policies

2018-05-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495202#comment-16495202
 ] 

Jan Høydahl edited comment on SOLR-12358 at 5/30/18 2:08 PM:
-

Your commit a875300a897521bc618d5072b20fcd60c8f13985 also screwed up more 
unrelated changes entries for 7.4, beyond what [~jpountz] and [~dsmiley] 
already fixed. Here are the top-most:
{code:java}
-Carrot2 3.16.0
+Carrot2 3.15.0

-Jetty 9.3.20.v20170531
+Jetty 9.4.10.v20180503

-* SOLR-12396: Upgrade Carrot2 to 3.16.0, HPPC to 0.8.1, morfologik to 2.1.5. 
(Dawid Weiss)
{code}
But there are other unrelated changes as well. [~noble.paul] please go through 
them all from the initial commit and fix all unrelated changes.

Here is [the CHANGES 
diff|https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blobdiff;f=solr/CHANGES.txt;h=99ff4b8d9bb877c4c8aaa66a8aa7ad9389f3a2c6;hp=2537d373aecfcbfa74d026525fd77064d134f576;hb=a875300;hpb=dc0dc1d6e3947114362f0104f8c8ae51e8e1ba36],
 which should really just add one changes entry but it rearranges and touches a 
bunch of other issues, which makes it really hard to cherry-pick from master to 
7x these days :(



> Autoscaling suggestions fail randomly and for certain policies
> --
>
> Key: SOLR-12358
> URL: https://issues.apache.org/jira/browse/SOLR-12358
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.3.1
>Reporter: Jerry Bao
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-12358.patch, SOLR-12358.patch, SOLR-12358.patch, 
> SOLR-12358.patch, diagnostics, nodes
>
>
> For the following policy
> {code:java}
> {"cores": "<4","node": "#ANY"}{code}
> the suggestions endpoint fails
> {code:java}
> "error": {"msg": "Comparison method violates its general contract!","trace": 
> "java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!\n\tat java.util.TimSort.mergeHi(TimSort.java:899)\n\tat 
> java.util.TimSort.mergeAt(TimSort.java:516)\n\tat 
> java.util.TimSort.mergeCollapse(TimSort.java:441)\n\tat 
> java.util.TimSort.sort(TimSort.java:245)\n\tat 
> java.util.Arrays.sort(Arrays.java:1512)\n\tat 
> java.util.ArrayList.sort(ArrayList.java:1462)\n\tat 
> java.util.Collections.sort(Collections.java:175)\n\tat 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.setApproxValuesAndSortNodes(Policy.java:363)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.applyRules(Policy.java:310)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.(Policy.java:272)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.createSession(Policy.java:376)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSuggestions(PolicyHelper.java:214)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleSuggestions(AutoScalingHandler.java:158)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleRequestBody(AutoScalingHandler.java:133)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)\n\tat
>  org.apache.solr.api.ApiBag$ReqHandlerToApi.call(ApiBag.java:242)\n\tat 
> org.apache.solr.api.V2HttpCall.handleAdmin(V2HttpCall.java:311)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandl
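For context on the error above: TimSort throws this IllegalArgumentException whenever a Comparator breaks its contract (antisymmetry/transitivity) — in this issue reportedly because compared node values change during sorting. A deterministic stand-alone illustration of one common contract violation, integer-subtraction overflow (not necessarily the exact mechanism in Solr's Policy code):

```java
import java.util.Comparator;

public class ContractViolation {
    // Broken: (a - b) overflows for large magnitudes, so the ordering is not
    // transitive — one common way to earn TimSort's
    // "Comparison method violates its general contract!" exception.
    static final Comparator<Integer> BAD  = (a, b) -> a - b;
    static final Comparator<Integer> GOOD = Integer::compare;

    public static void main(String[] args) {
        int x = Integer.MIN_VALUE, y = 1;
        // Mathematically x < y, but (x - y) overflows to a positive int,
        // so BAD claims x > y while GOOD orders the pair correctly.
        System.out.println(BAD.compare(x, y) > 0);   // prints true (wrong ordering)
        System.out.println(GOOD.compare(x, y) < 0);  // prints true (correct ordering)
    }
}
```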

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1036 - Still Failing

2018-05-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1036/

No tests ran.

Build Log:
[...truncated 24176 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2214 links (1768 relative) to 3097 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml



[jira] [Updated] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-05-30 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12392:
-
Fix Version/s: master (8.0)
   7.4

> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>







[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495189#comment-16495189
 ] 

ASF subversion and git services commented on SOLR-12392:


Commit 0ea764c139f83948c1b0e03e1fa47392cbec7c02 in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0ea764c ]

SOLR-12392: Fix waitForElapsed logic and state restoration. Enable the test.









[jira] [Resolved] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-05-30 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-12392.
--
Resolution: Fixed








[jira] [Updated] (LUCENE-5143) rm or formalize dealing with "general" KEYS files in our dist dir

2018-05-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/LUCENE-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-5143:

Attachment: LUCENE-5143.patch

> rm or formalize dealing with "general" KEYS files in our dist dir
> -
>
> Key: LUCENE-5143
> URL: https://issues.apache.org/jira/browse/LUCENE-5143
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: KEYS, KEYS, KEYS, KEYS, LUCENE-5143.patch, 
> LUCENE-5143.patch, LUCENE-5143.patch, LUCENE-5143.patch, 
> LUCENE-5143_READMEs.patch, LUCENE-5143_READMEs.patch, 
> LUCENE-5143_READMEs.patch, LUCENE_5143_KEYS.patch, verify.log, verify.sh, 
> verify.sh, verify.sh
>
>
> At some point in the past, we started creating snapshots of KEYS (taken 
> from the auto-generated data from id.apache.org) in the release dir of each 
> release...
> http://www.apache.org/dist/lucene/solr/4.4.0/KEYS
> http://www.apache.org/dist/lucene/java/4.4.0/KEYS
> http://archive.apache.org/dist/lucene/java/4.3.0/KEYS
> http://archive.apache.org/dist/lucene/solr/4.3.0/KEYS
> etc...
> But we also still have some "general" KEYS files...
> https://www.apache.org/dist/lucene/KEYS
> https://www.apache.org/dist/lucene/java/KEYS
> https://www.apache.org/dist/lucene/solr/KEYS
> ...which (as i discovered when i went to add my key to them today) are stale 
> and don't seem to be getting updated.
> I vaguely remember someone (rmuir?) explaining to me at one point the reason 
> we started creating a fresh copy of KEYS in each release dir, but i no longer 
> remember what they said, and i can't find any mention of a reason in any of 
> the release docs, or in any sort of comment in buildAndPushRelease.py
> we should probably do one of the following:
>  * remove these "general" KEYS files
>  * add a disclaimer to the top of these files that they are legacy files for 
> verifying old releases and are no longer used for new releases
>  * ensure these files are up to date stop generating per-release KEYS file 
> copies
>  * update our release process to ensure that the general files get updated on 
> each release as well






[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-05-30 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495181#comment-16495181
 ] 

ASF subversion and git services commented on SOLR-12392:


Commit d27a2e8996199c395482d06284f5582eeaa8c181 in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d27a2e8 ]

SOLR-12392: Fix waitForElapsed logic and state restoration. Enable the test.









[jira] [Updated] (LUCENE-5143) rm or formalize dealing with "general" KEYS files in our dist dir

2018-05-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/LUCENE-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-5143:

Attachment: verify.sh

> rm or formalize dealing with "general" KEYS files in our dist dir
> -
>
> Key: LUCENE-5143
> URL: https://issues.apache.org/jira/browse/LUCENE-5143
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: KEYS, KEYS, KEYS, KEYS, LUCENE-5143.patch, 
> LUCENE-5143.patch, LUCENE-5143.patch, LUCENE-5143_READMEs.patch, 
> LUCENE-5143_READMEs.patch, LUCENE-5143_READMEs.patch, LUCENE_5143_KEYS.patch, 
> verify.log, verify.sh, verify.sh, verify.sh
>
>






[jira] [Updated] (SOLR-12423) Upgrade to Tika 1.19 when available and refactor to use the ForkParser

2018-05-30 Thread Tim Allison (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated SOLR-12423:
---
Environment: (was: in Tika 1.19)

> Upgrade to Tika 1.19 when available and refactor to use the ForkParser
> --
>
> Key: SOLR-12423
> URL: https://issues.apache.org/jira/browse/SOLR-12423
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tim Allison
>Priority: Major
>
> In Tika 1.19, there will be the ability to call the ForkParser and specify a 
> directory of jars from which to load the classes for the Parser in the child 
> processes. This will allow us to remove all of the parser dependencies from 
> Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar 
> in the child process’ bin directory and be done with the upgrade... no more 
> fiddly dependency upgrades and threat of jar hell.
>  
> The ForkParser also protects against ooms, infinite loops and jvm crashes. 
> W00t!
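For context, Tika's existing ForkParser already isolates parsing in a child JVM; the 1.19 change discussed above adds the ability to point the child at a directory of jars. A rough sketch of the current API (the class name and file argument are illustrative; the new jar-directory constructor is not shown since it is still pending):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.fork.ForkParser;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;

/** Sketch only: parse a file in a forked JVM so parser OOMs,
 *  infinite loops, and JVM crashes cannot take down the main process. */
public class ForkParserSketch {
    public static void main(String[] args) throws Exception {
        // Today the child JVM loads parser classes through this class loader;
        // the 1.19 work would let it load them from a directory of jars instead.
        ForkParser parser = new ForkParser(
                ForkParserSketch.class.getClassLoader(), new AutoDetectParser());
        try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
            BodyContentHandler handler = new BodyContentHandler();
            parser.parse(in, handler, new Metadata(), new ParseContext());
            System.out.println(handler.toString());
        } finally {
            parser.close();  // shuts down the pooled child JVMs
        }
    }
}
```

Requires tika-core and tika-parsers on the classpath; not a drop-in for Solr's ExtractingRequestHandler, just an illustration of the isolation the issue describes.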






[jira] [Updated] (SOLR-12422) Update Ref Guide to recommend against using the ExtractingRequestHandler in production

2018-05-30 Thread Tim Allison (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated SOLR-12422:
---
Description: 
[~elyograg] recently updated the wiki to include the hard-learned guidance that 
the ExtractingRequestHandler should not be used in production. [~ctargett] 
recommended updating the reference guide instead. Let’s update the ref guide.

 

...note to self...don't open issue on tiny screen...sorry for the clutter...

> Update Ref Guide to recommend against using the ExtractingRequestHandler in 
> production
> --
>
> Key: SOLR-12422
> URL: https://issues.apache.org/jira/browse/SOLR-12422
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tim Allison
>Priority: Major
>
> [~elyograg] recently updated the wiki to include the hard-learned guidance 
> that the ExtractingRequestHandler should not be used in production. 
> [~ctargett] recommended updating the reference guide instead. Let’s update 
> the ref guide.
>  
> ...note to self...don't open issue on tiny screen...sorry for the clutter...






[jira] [Updated] (SOLR-12422) Update Ref Guide to recommend against using the ExtractingRequestHandler in production

2018-05-30 Thread Tim Allison (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated SOLR-12422:
---
Environment: (was: Shawn Heisey recently updated the wiki to include 
the hard-learned guidance that the ExtractingRequestHandler should not be used 
in production. Cassandra Targett recommended updating the reference guide 
instead. Let’s update the ref guide.)

> Update Ref Guide to recommend against using the ExtractingRequestHandler in 
> production
> --
>
> Key: SOLR-12422
> URL: https://issues.apache.org/jira/browse/SOLR-12422
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tim Allison
>Priority: Major
>







[jira] [Commented] (LUCENE-5143) rm or formalize dealing with "general" KEYS files in our dist dir

2018-05-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495158#comment-16495158
 ] 

Jan Høydahl commented on LUCENE-5143:
-

Uploaded an updated KEYS file with Cao's GPG key.

Do you guys feel confident that the changes are safe (i.e. the various 
verifications performed above, including the latest verify.sh), or do you need 
other tests?

I can commit this now, before 7.4.0.

> rm or formalize dealing with "general" KEYS files in our dist dir
> -
>
> Key: LUCENE-5143
> URL: https://issues.apache.org/jira/browse/LUCENE-5143
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: KEYS, KEYS, KEYS, KEYS, LUCENE-5143.patch, 
> LUCENE-5143.patch, LUCENE-5143.patch, LUCENE-5143_READMEs.patch, 
> LUCENE-5143_READMEs.patch, LUCENE-5143_READMEs.patch, LUCENE_5143_KEYS.patch, 
> verify.log, verify.sh, verify.sh
>
>
> At some point in the past, we started creating a snapshots of KEYS (taken 
> from the auto-generated data from id.apache.org) in the release dir of each 
> release...
> http://www.apache.org/dist/lucene/solr/4.4.0/KEYS
> http://www.apache.org/dist/lucene/java/4.4.0/KEYS
> http://archive.apache.org/dist/lucene/java/4.3.0/KEYS
> http://archive.apache.org/dist/lucene/solr/4.3.0/KEYS
> etc...
> But we also still have some "general" KEYS files...
> https://www.apache.org/dist/lucene/KEYS
> https://www.apache.org/dist/lucene/java/KEYS
> https://www.apache.org/dist/lucene/solr/KEYS
> ...which (as i discovered when i went to add my key to them today) are stale 
> and don't seem to be getting updated.
> I vaguely remember someone (rmuir?) explaining to me at one point the reason 
> we started creating a fresh copy of KEYS in each release dir, but i no longer 
> remember what they said, and i can't find any mention of a reason in any of 
> the release docs, or in any sort of comment in buildAndPushRelease.py
> we should probably do one of the following:
>  * remove these "general" KEYS files
>  * add a disclaimer to the top of these files that they are legacy files for 
> verifying old releases and are no longer used for new releases
>  * ensure these files are up to date and stop generating per-release KEYS 
> file copies
>  * update our release process to ensure that the general files get updated on 
> each release as well
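Whichever option is chosen, the workflow a KEYS file exists to support is importing the published keys and checking a release artifact's detached signature. A minimal sketch using the per-release URLs quoted above (the 4.4.0 paths and filenames are just the examples from this issue):

```shell
# Fetch the artifact, its detached signature, and the per-release KEYS file.
curl -sSfO https://archive.apache.org/dist/lucene/solr/4.4.0/KEYS
curl -sSfO https://archive.apache.org/dist/lucene/solr/4.4.0/solr-4.4.0.tgz
curl -sSfO https://archive.apache.org/dist/lucene/solr/4.4.0/solr-4.4.0.tgz.asc

gpg --import KEYS                                # import the release managers' public keys
gpg --verify solr-4.4.0.tgz.asc solr-4.4.0.tgz   # exits 0 only if the signature checks out
```

A stale "general" KEYS file breaks exactly this workflow for any key added after the file was last regenerated, which is the failure mode described above.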





