[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21410 - Unstable!

2018-02-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21410/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<5> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<0>
        at __randomizedtesting.SeedInfo.seed([CFA35B548F1067F9:A25FFFA9355898FE]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.failNotEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:128)
        at org.junit.Assert.assertEquals(Assert.java:472)
        at org.junit.Assert.assertEquals(Assert.java:456)
        at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:274)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13210 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest
   

[JENKINS] Lucene-Solr-Tests-7.x - Build # 358 - Still Unstable

2018-02-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/358/

4 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at http://127.0.0.1:42865/solr/awhollynewcollection_0: No registered leader was found after waiting for 4000ms , collection: awhollynewcollection_0 slice: shard2 saw state=DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/11)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{
    "shard1":{
      "range":"8000-d554",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"awhollynewcollection_0_shard1_replica_n1",
          "base_url":"http://127.0.0.1:35360/solr",
          "node_name":"127.0.0.1:35360_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"},
        "core_node5":{
          "core":"awhollynewcollection_0_shard1_replica_n2",
          "base_url":"http://127.0.0.1:32956/solr",
          "node_name":"127.0.0.1:32956_solr",
          "state":"active",
          "type":"NRT"},
        "core_node7":{
          "core":"awhollynewcollection_0_shard1_replica_n4",
          "base_url":"http://127.0.0.1:42865/solr",
          "node_name":"127.0.0.1:42865_solr",
          "state":"active",
          "type":"NRT"}}},
    "shard2":{
      "range":"d555-2aa9",
      "state":"active",
      "replicas":{
        "core_node9":{
          "core":"awhollynewcollection_0_shard2_replica_n6",
          "base_url":"http://127.0.0.1:37411/solr",
          "node_name":"127.0.0.1:37411_solr",
          "state":"down",
          "type":"NRT"},
        "core_node11":{
          "core":"awhollynewcollection_0_shard2_replica_n8",
          "base_url":"http://127.0.0.1:35360/solr",
          "node_name":"127.0.0.1:35360_solr",
          "state":"active",
          "type":"NRT"},
        "core_node13":{
          "core":"awhollynewcollection_0_shard2_replica_n10",
          "base_url":"http://127.0.0.1:32956/solr",
          "node_name":"127.0.0.1:32956_solr",
          "state":"active",
          "type":"NRT"}}},
    "shard3":{
      "range":"2aaa-7fff",
      "state":"active",
      "replicas":{
        "core_node15":{
          "core":"awhollynewcollection_0_shard3_replica_n12",
          "base_url":"http://127.0.0.1:42865/solr",
          "node_name":"127.0.0.1:42865_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"},
        "core_node17":{
          "core":"awhollynewcollection_0_shard3_replica_n14",
          "base_url":"http://127.0.0.1:37411/solr",
          "node_name":"127.0.0.1:37411_solr",
          "state":"active",
          "type":"NRT"},
        "core_node18":{
          "core":"awhollynewcollection_0_shard3_replica_n16",
          "base_url":"http://127.0.0.1:35360/solr",
          "node_name":"127.0.0.1:35360_solr",
          "state":"active",
          "type":"NRT"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"3",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"} with live_nodes=[127.0.0.1:32956_solr, 127.0.0.1:35360_solr, 127.0.0.1:42865_solr, 127.0.0.1:37411_solr]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:42865/solr/awhollynewcollection_0: No registered leader was found after waiting for 4000ms , collection: awhollynewcollection_0 slice: shard2 saw state=DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/11)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{
    "shard1":{
      "range":"8000-d554",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"awhollynewcollection_0_shard1_replica_n1",
          "base_url":"http://127.0.0.1:35360/solr",
          "node_name":"127.0.0.1:35360_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"},
        "core_node5":{
          "core":"awhollynewcollection_0_shard1_replica_n2",
          "base_url":"http://127.0.0.1:32956/solr",
          "node_name":"127.0.0.1:32956_solr",
          "state":"active",
          "type":"NRT"},
        "core_node7":{
          "core":"awhollynewcollection_0_shard1_replica_n4",
          "base_url":"http://127.0.0.1:42865/solr",
          "node_name":"127.0.0.1:42865_solr",
          "state":"active",
          "type":"NRT"}}},
    "shard2":{
      "range":"d555-2aa9",
      "state":"active",
      "replicas":{
        "core_node9":{
          "core":"awhollynewcollection_0_shard2_replica_n6",
          "base_url":"http://127.0.0.1:37411/solr",
          "node_name":"127.0.0.1:37411_solr",
          "state":"down",
          "type":"NRT"},
        "core_node11":{
          "core":"awhollynewcollection_0_shard2_replica_n8",
  

[jira] [Created] (SOLR-11954) Search behavior depends on kind of synonym mappings

2018-02-06 Thread Alexandr (JIRA)
Alexandr created SOLR-11954:
---

 Summary: Search behavior depends on kind of synonym mappings
 Key: SOLR-11954
 URL: https://issues.apache.org/jira/browse/SOLR-11954
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.2.1
Reporter: Alexandr


For a field with the following type:
{noformat}

   
      
      
      
   
   
      
      
      
      
   
{noformat}
 If synonyms are configured as follows:
{noformat}
b=>b,boron
2=>ii,2{noformat}
then for the query "my_field:b2" the parsed query is "my_field:b2 
Synonym(my_field:2 my_field:ii)".

But when synonyms are configured like this:
{noformat}
b,boron
ii,2{noformat}
then for the query "my_field:b2" the parsed query is "my_field:b2 my_field:\"b 2\" 
my_field:\"b ii\" my_field:\"boron 2\" my_field:\"boron ii\"".

The second query is correct (it applies synonyms to both parts after the word split). 

Search behavior should not depend on the kind of synonym mappings.

This issue has also been discussed on the solr-user mailing list:
 
[http://lucene.472066.n3.nabble.com/SynonymGraphFilterFactory-with-WordDelimiterGraphFilterFactory-usage-td4373974.html]

I reproduced this on Solr 7.1.0, and it can also be reproduced on 7.2.1.
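The asymmetry between the two synonyms.txt styles can be illustrated outside Solr. Below is a minimal plain-Java sketch of the two rule formats (explicit `=>` mappings vs. equivalence sets); the class and method names are made up for illustration, and this is not Solr's actual SynonymGraphFilter parser:

```java
import java.util.*;

// Simplified model of the two synonyms.txt rule formats, NOT Solr's real parser.
class SynonymRules {
    // "b=>b,boron"  : explicit mapping - the left side expands to the right side only.
    // "b,boron"     : equivalence set  - every member expands to all members.
    static Map<String, Set<String>> parse(List<String> lines) {
        Map<String, Set<String>> rules = new HashMap<>();
        for (String line : lines) {
            if (line.contains("=>")) {
                String[] parts = line.split("=>");
                Set<String> out = new TreeSet<>(Arrays.asList(parts[1].split(",")));
                rules.put(parts[0].trim(), out);
            } else {
                Set<String> group = new TreeSet<>(Arrays.asList(line.split(",")));
                for (String term : group) rules.put(term, group);
            }
        }
        return rules;
    }

    // Expand a single token; unknown tokens expand to themselves.
    static Set<String> expand(Map<String, Set<String>> rules, String token) {
        return rules.getOrDefault(token, new TreeSet<>(Collections.singleton(token)));
    }
}
```

With `2=>ii,2` only the token `2` expands to {2, ii}; with `ii,2` both `ii` and `2` expand to the same set. The asymmetry reported above, however, arises from how SynonymGraphFilter interacts with WordDelimiterGraphFilter's split of `b2` into `b`/`2`, which this sketch deliberately does not model.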



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11953) Include JTS with Solr

2018-02-06 Thread David Smiley (JIRA)
David Smiley created SOLR-11953:
---

 Summary: Include JTS with Solr
 Key: SOLR-11953
 URL: https://issues.apache.org/jira/browse/SOLR-11953
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: spatial
Reporter: David Smiley


JTS 1.15.0 is dual-licensed, one of which is a BSD 3-clause.  LUCENE-8161 
upgrades Spatial4j to 0.7 and puts JTS on the test classpath for 
lucene-spatial-extras.  jts-core.jar weighs in at 779KB.  By including JTS in 
Solr, we make it easier for users to use more advanced spatial capabilities 
with Solr.  One of the pain points today is that you can't even place the JTS 
jar into a typical Solr lib dir; it has been necessary to put it in WEB-INF/lib 
due to how Spatial4j loads it indirectly.  No more.  This issue should address 
the ref guide page too.






[jira] [Commented] (LUCENE-8161) Update to Spatial4j 0.7 (to support JTS 1.15)

2018-02-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354970#comment-16354970
 ] 

David Smiley commented on LUCENE-8161:
--

Based on our docs in LicenseType, the right one to use in this case is BSD_LIKE.

Lucene CHANGES.txt:
{noformat}
Other

* LUCENE-8161: spatial-extras: the Spatial4j dependency has been updated from 0.6 to 0.7,
  which is drop-in compatible (Lucene doesn't expressly use any of the few API differences).
  Spatial4j 0.7 is compatible with JTS 1.15.0 and not any prior version.  JTS 1.15.0 is
  dual-licensed to include BSD; prior versions were LGPL.  (David Smiley)
{noformat}

I'll file a follow-up issue for Solr.

> Update to Spatial4j 0.7 (to support JTS 1.15)
> -
>
> Key: LUCENE-8161
> URL: https://issues.apache.org/jira/browse/LUCENE-8161
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8161_Spatial4j_0_7_and_add_JTS_1_15_0.patch
>
>
> Spatial4j 0.7 was released late December 2017, principally with support for 
> JTS 1.15.  There are some other changes less pertinent to Lucene/Solr but 
> I'll refer to the change list: 
> [https://github.com/locationtech/spatial4j/blob/spatial4j-0.7/CHANGES.md]
> This JTS release has an API breakage in that the package root was changed 
> from {{com.vividsolutions}} to {{org.locationtech}} but should otherwise be 
> compatible. JTS is now dual-licensed as EPL 1.0 and EDL 1.0 (a BSD style 
> 3-clause license). This JTS release also included various improvements, 
> including faster LineString intersection.  That performance improvement was 
> found in the context of Lucene spatial-extras real-world use.
> Anyone using JTS with lucene-spatial-extras will be forced to update to JTS 
> 1.15.  I'd like to add a test dependency from lucene-spatial-extras to JTS 
> (the BSD licensed version of course) as there is at least one test with a 
> JUnit "assumeTrue" on it being on the classpath – JtsPolygonTest.
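The "assumeTrue on it being on the classpath" guard mentioned above boils down to a reflective class lookup. A hedged sketch (the JTS class name is real; the helper class is hypothetical, not Lucene's actual test code):

```java
// Detect an optional dependency at runtime, the way a test's assumeTrue
// guard typically does. Returns true only if the class can be loaded.
class OptionalDep {
    static boolean onClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }
}
```

A test like JtsPolygonTest can then call something like `Assume.assumeTrue(OptionalDep.onClasspath("org.locationtech.jts.geom.Geometry"))` and be skipped cleanly when JTS is absent; note the lookup must use the new `org.locationtech` package root, since JTS 1.15 moved off `com.vividsolutions`.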






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 942 - Still Failing

2018-02-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/942/

No tests ran.

Build Log:
[...truncated 28602 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (32.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.2 MB in 0.04 sec (786.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.2 MB in 0.08 sec (907.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.6 MB in 0.10 sec (830.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6247 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6247 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (254.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 52.6 MB in 0.05 sec (989.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 151.6 MB in 0.14 sec (1051.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 152.6 MB in 0.14 sec (1064.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] *** [WARN] *** Your open file limit is currently 6.  
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or solr.in.sh
   [smoker] *** [WARN] ***  Your Max Processes Limit is currently 10240. 
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 357 - Still Unstable

2018-02-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/357/

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.ValueFacetTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.analytics.facet.ValueFacetTest: 
   1) Thread[id=72, name=qtp250559835-72, state=TIMED_WAITING, group=TGRP-ValueFacetTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.analytics.facet.ValueFacetTest: 
   1) Thread[id=72, name=qtp250559835-72, state=TIMED_WAITING, group=TGRP-ValueFacetTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([FE64A6BBC0A09CF2]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.ValueFacetTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=72, name=qtp250559835-72, state=TIMED_WAITING, group=TGRP-ValueFacetTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=72, name=qtp250559835-72, state=TIMED_WAITING, group=TGRP-ValueFacetTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([FE64A6BBC0A09CF2]:0)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeAddedTriggerRestoreState

Error Message:
The trigger did not fire at all

Stack Trace:
java.lang.AssertionError: The trigger did not fire at all
        at __randomizedtesting.SeedInfo.seed([D6E1B8D140A8A33C:5EDC31AE7A684291]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeAddedTriggerRestoreState(TriggerIntegrationTest.java:426)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at 

[jira] [Commented] (SOLR-10653) After Upgrade from 5.3.1 to 6.4.2, Solr is storing certain fields like UUID, BigDecimal, Enums as :

2018-02-06 Thread Thomas Heigl (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354537#comment-16354537
 ] 

Thomas Heigl commented on SOLR-10653:
-

I just ran into the same issue when upgrading from Solr 4.10 to Solr 7.1. Do I 
have to change all my enum fields to String values to fix this or is there some 
kind of configuration for SolrJ?

> After Upgrade from 5.3.1 to 6.4.2, Solr is storing certain fields like UUID, 
> BigDecimal, Enums as :
> 
>
> Key: SOLR-10653
> URL: https://issues.apache.org/jira/browse/SOLR-10653
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, SolrJ
>Affects Versions: 6.4.2
>Reporter: Sudharshan Krishnamurthy
>Priority: Major
>
> Originally, in 5.3.1, when object types such as java.util.UUID, Enum, or 
> BigDecimal were supplied to SolrInputDocument, the conversion to the 
> corresponding data types defined in the Solr schema (in this case string, 
> string, and float respectively) happened just fine. After the upgrade to 
> 6.4.2, I see that when such values are supplied to SolrInputDocument, they 
> get stored on save as 
> "java.util.UUID:0997e78e-6e3d-4824-8c52-8cc15533e541" for a UUID, for 
> example, and with the fully qualified name of the class for Enums etc. Hence 
> while deserializing we get errors such as 
> Invalid UUID String: 'java.util.UUID:0997e78e-6e3d-4824-8c52-8cc15533e541'
> Converting these fields to String before supplying them to 
> SolrInputDocument, or converting to varchar for delta-import queries, seems 
> to fix the problem. I wonder what changed between the two versions to make 
> this String or varchar conversion necessary when it was not required before.
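The client-side workaround described in the report (stringify before indexing) can be centralized in one helper. A pure-JDK sketch - the helper name is made up, and SolrJ itself is not involved here:

```java
import java.math.BigDecimal;
import java.util.UUID;

// Normalize "rich" Java types to the plain string form the Solr schema
// expects, instead of letting the client serialize them as
// "java.util.UUID:<value>"-style class:value pairs.
class FieldValues {
    static Object toIndexable(Object v) {
        if (v instanceof UUID || v instanceof Enum || v instanceof BigDecimal) {
            return v.toString();
        }
        return v; // String, numbers, Date etc. pass through unchanged
    }
}
```

Calling something like `doc.addField("id", FieldValues.toIndexable(uuid))` then stores the bare `0997e78e-...` form, which round-trips cleanly through `UUID.fromString` on read.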






[jira] [Commented] (SOLR-11459) AddUpdateCommand#prevVersion is not cleared which may lead to problem for in-place updates of non existed documents

2018-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354476#comment-16354476
 ] 

ASF subversion and git services commented on SOLR-11459:


Commit 359ae6fe6efa06feaefa67ea2465589e92dd52bb in lucene-solr's branch 
refs/heads/branch_7x from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=359ae6f ]

SOLR-11459: Fix in-place nonexistent doc update following existing doc update

Applying again. No changes.


> AddUpdateCommand#prevVersion is not cleared which may lead to problem for 
> in-place updates of non existed documents
> ---
>
> Key: SOLR-11459
> URL: https://issues.apache.org/jira/browse/SOLR-11459
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.0
>Reporter: Andrey Kudryavtsev
>Assignee: Mikhail Khludnev
>Priority: Minor
> Attachments: SOLR-11459.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I have a 1-shard / *m*-replicas SolrCloud cluster with Solr 6.6.0 and run 
> batches of 5-10k in-place updates from time to time. 
> Once I noticed that the job "hangs" - it started and couldn't finish for a 
> while.
> Logs were full of messages like:
> {code} Missing update, on which the current in-place update depends, hasn't 
> arrived. id=__, looking for version=___, last found version=0"  {code}
> {code} 
> Tried to fetch document ___ from the leader, but the leader says document has 
> been deleted. Deleting the document here and skipping this update: Last found 
> version: 0, was looking for: ___",24,0,"but the leader says document has been 
> deleted. Deleting the document here and skipping this update: Last found 
> version: 0
> {code}
> Further analysis shows that:
> * There are 100-500 updates for nonexistent documents among the other updates 
> (something that I have to deal with).
> * The leader receives a bunch of updates and executes them one by one. 
> {{JavabinLoader}}, which is used to process documents, reuses the same instance 
> of {{AddUpdateCommand}} for every update and just [clears its state at the 
> end|https://github.com/apache/lucene-solr/blob/e2521b2a8baabdaf43b92192588f51e042d21e97/solr/core/src/java/org/apache/solr/handler/loader/JavabinLoader.java#L99].
>  The field [AddUpdateCommand#prevVersion| 
> https://github.com/apache/lucene-solr/blob/6396cb759f8c799f381b0730636fa412761030ce/solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java#L76]
>  is not cleared.
> * If the update is an in-place update but the specified document does not 
> exist, the update is processed as a regular atomic update (i.e. a new doc is 
> created), but {{prevVersion}} is used as the {{distrib.inplace.prevversion}} 
> parameter in sequential calls to every replica in DistributedUpdateProcessor. 
> Since {{prevVersion}} wasn't cleared, it may contain the version from the 
> previously processed update.
> * Each replica checks its own version of the document, which is 0 (because the 
> doc does not exist), concludes that some updates were missed, spends 5 seconds in 
> [DistributedUpdateProcessor#waitForDependentUpdates|https://github.com/apache/lucene-solr/blob/e2521b2a8baabdaf43b92192588f51e042d21e97/solr/core/src/java/org/apache/solr/handler/loader/JavabinLoader.java#L99]
>  waiting for the missed updates (no luck), and also tries to get the "correct" 
> version from the leader (no luck as well).
> * So each update for a nonexistent document costs *m* * 5 sec.
> I worked around this with an explicit check for document existence, but it 
> should probably be fixed.
> The obvious first guess is that {{prevVersion}} should be cleared in 
> {{AddUpdateCommand#clear}}, but I have no clue how to test it.
> {code}
> +++ solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java   
> (revision )
> @@ -78,6 +78,7 @@
>   updateTerm = null;
>   isLastDocInBatch = false;
>   version = 0;
> + prevVersion = -1;
> }
> {code}
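The failure mode described in the report (a reused command object carrying a stale field across updates) is easy to reproduce in isolation. A self-contained sketch with made-up class names - not Solr's actual AddUpdateCommand - showing why clear() must reset every per-update field:

```java
// Minimal model of a reused per-update command object. If clear() forgets
// one field, the next update silently inherits the previous update's value.
class ReusedCommand {
    long version = 0;
    long prevVersion = -1; // -1 means "no in-place dependency"

    // Buggy reset, mirroring the reported issue: prevVersion survives.
    void clearBuggy() {
        version = 0;
    }

    // Fixed reset, as in the attached patch: every field back to its default.
    void clearFixed() {
        version = 0;
        prevVersion = -1;
    }
}
```

After an in-place update sets prevVersion=42, clearBuggy() leaves 42 behind, so a subsequent update of a nonexistent document is distributed with a bogus distrib.inplace.prevversion, and each of the *m* replicas then waits 5 seconds for a version that never existed.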






[jira] [Commented] (SOLR-11459) AddUpdateCommand#prevVersion is not cleared which may lead to problem for in-place updates of non existed documents

2018-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354426#comment-16354426
 ] 

ASF subversion and git services commented on SOLR-11459:


Commit 2c2a03f01d9a2fbbd7031d3f15a971b5aeb0c598 in lucene-solr's branch 
refs/heads/branch_7x from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2c2a03f ]

Revert "SOLR-11459: Fix in-place nonexistent doc update following existing doc 
update"

This reverts commit 76d94c19bb8f6480ec0119ad77d6601432b7099b.


> AddUpdateCommand#prevVersion is not cleared which may lead to problem for 
> in-place updates of non existed documents
> ---
>
> Key: SOLR-11459
> URL: https://issues.apache.org/jira/browse/SOLR-11459
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.0
>Reporter: Andrey Kudryavtsev
>Assignee: Mikhail Khludnev
>Priority: Minor
> Attachments: SOLR-11459.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I have a 1_shard / *m*_replicas SolrCloud cluster with Solr 6.6.0 and run 
> batches of 5-10k in-place updates from time to time. 
> Once I noticed that a job "hung" - it started and couldn't finish for a 
> while.
> Logs were full of messages like:
> {code} Missing update, on which current in-place update depends on, hasn't 
> arrived. id=__, looking for version=___, last found version=0"  {code}
> {code} 
> Tried to fetch document ___ from the leader, but the leader says document has 
> been deleted. Deleting the document here and skipping this update: Last found 
> version: 0, was looking for: ___",24,0,"but the leader says document has been 
> deleted. Deleting the document here and skipping this update: Last found 
> version: 0
> {code}
> Further analysis showed that:
> * There are 100-500 updates for nonexistent documents among the other 
> updates (something that I have to deal with)
> * The leader receives a bunch of updates and executes them one by one. 
> {{JavabinLoader}}, which is used to process the documents, reuses the same 
> instance of {{AddUpdateCommand}} for every update and just [clears its state 
> at the end|https://github.com/apache/lucene-solr/blob/e2521b2a8baabdaf43b92192588f51e042d21e97/solr/core/src/java/org/apache/solr/handler/loader/JavabinLoader.java#L99].
> The field [AddUpdateCommand#prevVersion|https://github.com/apache/lucene-solr/blob/6396cb759f8c799f381b0730636fa412761030ce/solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java#L76]
> is not cleared.
> * When an update is an in-place update but the specified document does not 
> exist, the update is processed as a regular atomic update (i.e. a new doc is 
> created), but {{prevVersion}} is used as the {{distrib.inplace.prevversion}} 
> parameter in sequential calls to every slave in DistributedUpdateProcessor. 
> Since {{prevVersion}} wasn't cleared, it may contain the version from the 
> previously processed update.
> * Each slave checks its own version of the document, which is 0 (because the 
> doc does not exist), concludes that some updates were missed, spends 5 
> seconds in 
> [DistributedUpdateProcessor#waitForDependentUpdates|https://github.com/apache/lucene-solr/blob/e2521b2a8baabdaf43b92192588f51e042d21e97/solr/core/src/java/org/apache/solr/handler/loader/JavabinLoader.java#L99]
> waiting for the missed updates (no luck), and also tries to get the 
> "correct" version from the leader (no luck either)
> * So each update for a nonexistent document costs *m* * 5 sec
> I worked around this with an explicit check for the document's existence, 
> but it probably should be fixed.
> The obvious first guess is that {{prevVersion}} should be cleared in 
> {{AddUpdateCommand#clear}}, but I have no clue how to test it.
> {code}
> +++ solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java   
> (revision )
> @@ -78,6 +78,7 @@
>   updateTerm = null;
>   isLastDocInBatch = false;
>   version = 0;
> + prevVersion = -1;
> }
> {code}
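The stale-state bug described above boils down to one field surviving the command's clear() call. A minimal, self-contained sketch of that failure mode, assuming a hypothetical {{Cmd}} class as a stand-in for the real {{AddUpdateCommand}} (this is not Solr code):

```java
// Minimal model of the reuse bug: one command object is reused across
// updates, and its buggy clear() forgets to reset prevVersion.
public class ReuseBugSketch {
    static class Cmd {                // hypothetical stand-in, not AddUpdateCommand
        long version = 0;
        long prevVersion = -1;        // -1 means "not an in-place update"

        // Buggy clear(): resets version but not prevVersion,
        // mirroring the pre-patch behavior described in the issue.
        void clearBuggy() { version = 0; }

        // Fixed clear(): also resets prevVersion, as in the patch above.
        void clearFixed() { version = 0; prevVersion = -1; }
    }

    // Simulate two updates processed with one reused command object and
    // return the prevVersion seen by the second (nonexistent-doc) update.
    static long secondUpdatePrevVersion(boolean fixed) {
        Cmd cmd = new Cmd();
        cmd.prevVersion = 42;         // first update: in-place on an existing doc
        if (fixed) cmd.clearFixed(); else cmd.clearBuggy();
        // second update: doc does not exist, so prevVersion should be -1
        return cmd.prevVersion;
    }

    public static void main(String[] args) {
        System.out.println(secondUpdatePrevVersion(false)); // stale: 42
        System.out.println(secondUpdatePrevVersion(true));  // clean: -1
    }
}
```

With the buggy clear(), the second update inherits prevVersion=42 from the first, which is exactly the condition that sends the slaves into the 5-second wait.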






[GitHub] lucene-solr pull request #314: SOLR-11459 Clear AddUpdateCommand#prevVersion...

2018-02-06 Thread werder06
Github user werder06 closed the pull request at:

https://github.com/apache/lucene-solr/pull/314


---




[jira] [Commented] (SOLR-11459) AddUpdateCommand#prevVersion is not cleared which may lead to problem for in-place updates of non existed documents

2018-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354302#comment-16354302
 ] 

ASF subversion and git services commented on SOLR-11459:


Commit 76d94c19bb8f6480ec0119ad77d6601432b7099b in lucene-solr's branch 
refs/heads/branch_7x from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=76d94c1 ]

SOLR-11459: Fix in-place nonexistent doc update following existing doc update








[jira] [Commented] (SOLR-11459) AddUpdateCommand#prevVersion is not cleared which may lead to problem for in-place updates of non existed documents

2018-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354268#comment-16354268
 ] 

ASF subversion and git services commented on SOLR-11459:


Commit c50806824005b979a7e4854af38b2d8071bc52c0 in lucene-solr's branch 
refs/heads/master from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c508068 ]

SOLR-11459: Fix in-place nonexistent doc update following existing doc update








[jira] [Commented] (SOLR-11950) CLUSTERSTATUS shards parameter does not accept comma delimited list

2018-02-06 Thread Chris Ulicny (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354255#comment-16354255
 ] 

Chris Ulicny commented on SOLR-11950:
-

Superficially, this seems like a simple fix. I've uploaded a patch with a 
test as a starting point.

The test seemed to work, but I'm not sure whether I ran it correctly.

> CLUSTERSTATUS shards parameter does not accept comma delimited list
> ---
>
> Key: SOLR-11950
> URL: https://issues.apache.org/jira/browse/SOLR-11950
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.3.1, 7.2, master (8.0), 7.2.1
>Reporter: Chris Ulicny
>Priority: Minor
>  Labels: collection-api
> Attachments: SOLR-11950.patch
>
>
> According to the documentation for the Collections API, the CLUSTERSTATUS 
> action should accept a comma-delimited list of shards if specified. However, 
> when a comma-delimited list is specified, it is treated as a single value 
> instead of being parsed into multiple values.
> The request
> .../collections?action=CLUSTERSTATUS&collection=test_collection&shard=shard1,shard2
> yields the response:
> {"responseHeader":\{"status":400,"QTime":5},"error":\{"metadata":["error-class","org.apache.solr.common.SolrException","root-error-class","org.apache.solr.common.SolrException"],"msg":"Collection:
>  test_collection shard: shard1,shard2 not found","code":400}}
> instead of locating both shard1 and shard2.
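The fix direction implied by the report is to split the {{shard}} parameter on commas before looking up each shard, rather than treating the whole string as one shard name. A hedged sketch of that parsing step (class and method names here are illustrative, not Solr's actual code):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: split a comma-delimited "shard" parameter value
// into individual shard names before lookup.
public class ShardParamSketch {
    static List<String> parseShards(String shardParam) {
        if (shardParam == null) return List.of();
        return Arrays.asList(shardParam.split(","));
    }

    public static void main(String[] args) {
        System.out.println(parseShards("shard1,shard2")); // [shard1, shard2]
        System.out.println(parseShards("shard1"));        // [shard1]
    }
}
```

Each returned name would then be resolved individually, so "shard1,shard2" locates both shards instead of failing with a single "shard1,shard2 not found" lookup.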






[jira] [Updated] (SOLR-11950) CLUSTERSTATUS shards parameter does not accept comma delimited list

2018-02-06 Thread Chris Ulicny (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Ulicny updated SOLR-11950:

Attachment: SOLR-11950.patch







[jira] [Commented] (SOLR-10308) Solr fails to work with Guava 21.0

2018-02-06 Thread Vincent Massol (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354154#comment-16354154
 ] 

Vincent Massol commented on SOLR-10308:
---

{quote}
On XWiki and Solr: Why would anyone EVER use EmbeddedSolrServer in a production 
application? There is absolutely no way to provide high availability, 
redundancy, load balancing, etc.
{quote}

I can answer that one ;) Simply because XWiki provides the embedded SOLR as the 
default and it works very well for the majority of use cases (about 99.99% of 
them). When an install requires a more elaborate SOLR setup (with all the 
features you mentioned), XWiki allows using an external SOLR instance.


> Solr fails to work with Guava 21.0
> --
>
> Key: SOLR-10308
> URL: https://issues.apache.org/jira/browse/SOLR-10308
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Affects Versions: 6.4.2
>Reporter: Vincent Massol
>Priority: Major
>
> This is what we get:
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.firstNonNull(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;
>   at 
> org.apache.solr.handler.component.HighlightComponent.prepare(HighlightComponent.java:118)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2299)
>   at 
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:178)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>   at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
>   at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
>   at 
> org.xwiki.search.solr.internal.AbstractSolrInstance.query(AbstractSolrInstance.java:117)
>   at 
> org.xwiki.query.solr.internal.SolrQueryExecutor.execute(SolrQueryExecutor.java:122)
>   at 
> org.xwiki.query.internal.DefaultQueryExecutorManager.execute(DefaultQueryExecutorManager.java:72)
>   at 
> org.xwiki.query.internal.SecureQueryExecutorManager.execute(SecureQueryExecutorManager.java:67)
>   at org.xwiki.query.internal.DefaultQuery.execute(DefaultQuery.java:287)
>   at org.xwiki.query.internal.ScriptQuery.execute(ScriptQuery.java:237)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:395)
>   at 
> org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:384)
>   at 
> org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:173)
>   ... 183 more
> {noformat}
> Guava 21 has removed some method signatures that Solr is currently using.
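To my knowledge, the call in the stack trace, {{com.google.common.base.Objects.firstNonNull}}, was deprecated in favor of {{MoreObjects.firstNonNull}} (available since Guava 18) and removed in Guava 21; Java 9+ also offers {{java.util.Objects.requireNonNullElse}}. A dependency-free sketch of the same behavior, for illustration only:

```java
// Plain-Java equivalent of the removed Guava Objects.firstNonNull:
// return the first argument if non-null, else the second, else throw.
public class FirstNonNullSketch {
    static <T> T firstNonNull(T first, T second) {
        if (first != null) return first;
        if (second != null) return second;
        throw new NullPointerException("both arguments were null");
    }

    public static void main(String[] args) {
        System.out.println(firstNonNull("a", "b"));  // a
        System.out.println(firstNonNull(null, "b")); // b
    }
}
```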






[jira] [Commented] (SOLR-10308) Solr fails to work with Guava 21.0

2018-02-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354149#comment-16354149
 ] 

Shawn Heisey commented on SOLR-10308:
-

bq. XWiki is stuck with Guava 20 (on which it's working well by the way so you 
might consider upgrading at least to this version) because it's embedding Solr.

Solr supports indexes in HDFS, which means that Solr has Hadoop dependencies.  
The version of Hadoop included with Solr doesn't work with newer Guava 
versions.  Fixing the problems with the core Solr code related to Guava 21 is 
easy, but there is little we can do about Hadoop itself; it's a completely 
separate project.  The vast majority of Solr users don't use HDFS, and those 
users apparently can upgrade to Guava 20, but the project as a whole cannot 
upgrade.

Hadoop 3.0 has been upgraded to Guava 18 (they were at version 11!).  I don't 
know whether this means it would be compatible with Guava 21 or later.  
See HADOOP-10101 and SOLR-9515.

On XWiki and Solr: Why would anyone EVER use EmbeddedSolrServer in a production 
application?  There is absolutely no way to provide high availability, 
redundancy, load balancing, etc.








[jira] [Commented] (SOLR-10308) Solr fails to work with Guava 21.0

2018-02-06 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354052#comment-16354052
 ] 

Julian Reschke commented on SOLR-10308:
---

Removing the Guava dependency (or alternatively allowing a greater range of 
Guava versions, including 21.0) would be appreciated (see 
https://issues.apache.org/jira/browse/OAK-7182).







[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1471 - Still Unstable

2018-02-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1471/

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.ltr.feature.TestFieldLengthFeature

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.ltr.feature.TestFieldLengthFeature: 1) Thread[id=88, 
name=qtp1192070498-88, state=TIMED_WAITING, group=TGRP-TestFieldLengthFeature]  
   at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.ltr.feature.TestFieldLengthFeature: 
   1) Thread[id=88, name=qtp1192070498-88, state=TIMED_WAITING, 
group=TGRP-TestFieldLengthFeature]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([A539D143B4BF2303]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.ltr.feature.TestFieldLengthFeature

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=88, 
name=qtp1192070498-88, state=TIMED_WAITING, group=TGRP-TestFieldLengthFeature]  
   at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=88, name=qtp1192070498-88, state=TIMED_WAITING, 
group=TGRP-TestFieldLengthFeature]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([A539D143B4BF2303]:0)


FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:38320/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:43978/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:38320/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:43978/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([970C435508514221:3DC190A7BF8297F1]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 

[jira] [Assigned] (SOLR-11952) Add updatingRegress Stream Decorator to support Streaming Regression

2018-02-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11952:
-

Assignee: Joel Bernstein

> Add updatingRegress Stream Decorator to support Streaming Regression
> 
>
> Key: SOLR-11952
> URL: https://issues.apache.org/jira/browse/SOLR-11952
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> The current *olsRegress* function works on in-memory matrices. It would be 
> nice to also have an online updating multivariate regression implementation 
> that works with data sets of any size.
> This ticket will add support for *Miller Updating Regression*, which can be 
> used with data sets of any size. An *UpdatingRegressionStream* Stream 
> Decorator will be added to support this functionality. The function will 
> likely be named *updatingRegress*.
> The implementation will be provided by Apache Commons Math.
>  
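The "updating" idea above is that the regression consumes one observation at a time and keeps only constant-size running state, so the data set size is unbounded. The actual implementation would come from Apache Commons Math's MillerUpdatingRegression; as a library-free illustration of the same principle, here is a simple-linear-regression analog (names and structure are mine, not from the ticket):

```java
// Online simple linear regression: O(1) state, one observation at a time.
// Illustrates the "updating" principle; a multivariate Miller updating
// regression maintains analogous running state.
public class UpdatingRegressSketch {
    private long n;
    private double sumX, sumY, sumXX, sumXY;

    void addObservation(double x, double y) {
        n++; sumX += x; sumY += y; sumXX += x * x; sumXY += x * y;
    }

    double slope() {
        return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    }

    double intercept() {
        return (sumY - slope() * sumX) / n;
    }

    public static void main(String[] args) {
        UpdatingRegressSketch r = new UpdatingRegressSketch();
        // stream points from y = 2x + 1, one at a time
        for (double x = 0; x < 5; x++) r.addObservation(x, 2 * x + 1);
        System.out.println(r.slope());     // 2.0
        System.out.println(r.intercept()); // 1.0
    }
}
```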






[jira] [Commented] (SOLR-11597) Implement RankNet.

2018-02-06 Thread Michael A. Alcorn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353937#comment-16353937
 ] 

Michael A. Alcorn commented on SOLR-11597:
--

Hi, [~cpoerschke]. Please see my latest commit 
[here|https://github.com/apache/lucene-solr/pull/270/commits/96746d1e97380848d06eaea03d52a892af6f3794].

{quote}So I think we can stick with the matrix approach here...{quote}

Based on this feedback and the rest of the discussion in this Jira, the model 
representation now takes the following form:

{code:json}
"layers" : [
{
"matrix" : [ [ 1.0, 2.0, 3.0 ],
 [ 4.0, 5.0, 6.0 ],
 [ 7.0, 8.0, 9.0 ],
 [ 10.0, 11.0, 12.0 ] ],
"bias" : [ 13.0, 14.0, 15.0, 16.0 ],
"activation" : "relu"
},
{
"matrix" : [ [ 17.0, 18.0, 19.0, 20.0 ],
 [ 21.0, 22.0, 23.0, 24.0 ] ],
"bias" : [ 25.0, 26.0 ],
"activation" : "relu"
},
{
"matrix" : [ [ 27.0, 28.0 ] ],
"bias" : [ 29.0 ],
"activation" : "none"
}
]
{code}

I've also added a {{Layer}} class that I believe makes the logic of the 
calculations clearer.
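As a rough sketch of what such a layer computes (the class and method names here are hypothetical, not the actual PR code), each layer in the JSON above applies activation(matrix · input + bias), with one matrix row per output dimension:

```java
// Hypothetical sketch of a RankNet layer: out = activation(matrix * in + bias).
// Names (Layer, forward) are illustrative, not the classes from the PR.
public class Layer {
    private final float[][] matrix; // shape: [outputDim][inputDim]
    private final float[] bias;     // length: outputDim
    private final boolean relu;     // "relu" vs "none" activation

    public Layer(float[][] matrix, float[] bias, boolean relu) {
        this.matrix = matrix;
        this.bias = bias;
        this.relu = relu;
    }

    public float[] forward(float[] in) {
        float[] out = new float[matrix.length];
        for (int i = 0; i < matrix.length; i++) {
            float sum = bias[i];
            for (int j = 0; j < in.length; j++) {
                sum += matrix[i][j] * in[j];
            }
            out[i] = relu ? Math.max(0f, sum) : sum;
        }
        return out;
    }

    public static void main(String[] args) {
        // The first layer from the JSON above maps a 3-dim input to a 4-dim output.
        Layer l = new Layer(new float[][] {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}, {10, 11, 12}},
                            new float[] {13, 14, 15, 16}, true);
        System.out.println(java.util.Arrays.toString(l.forward(new float[] {1, 0, -1})));
    }
}
```

Note how the row count of each {{matrix}} fixes the length of the next layer's input, which is why the matrix representation is convenient for validating the model at load time.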

bq. On the non-linearity vs. activation-function choice, let's go with 
activation or activation function.

Done ("activation").

bq. And can I suggest we take a function/functor style approach...

Yep, agreed that's a better approach.

bq. Might it be helpful to include the input values somehow...

Added them to {{explain}}.

bq. Using StringBuilder in explain...

Done.

> Implement RankNet.
> --
>
> Key: SOLR-11597
> URL: https://issues.apache.org/jira/browse/SOLR-11597
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Michael A. Alcorn
>Assignee: Christine Poerschke
>Priority: Major
> Fix For: 7.3
>
>
> Implement RankNet as described in [this 
> tutorial|https://github.com/airalcorn2/Solr-LTR].






[jira] [Created] (SOLR-11952) Add updatingRegress Stream Decorator to support Streaming Regression

2018-02-06 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11952:
-

 Summary: Add updatingRegress Stream Decorator to support Streaming 
Regression
 Key: SOLR-11952
 URL: https://issues.apache.org/jira/browse/SOLR-11952
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


The current *olsRegress* function works on in-memory matrices. It would be nice 
to also have an online updating multivariate regression implementation that 
works with data sets of any size.

This ticket will add support for *Miller Updating Regression*, which can be 
used with data sets of any size. An *UpdatingRegressionStream* Stream Decorator 
will be added to support this functionality. The function will likely be named 
*updatingRegress*.

The implementation will be provided by Apache Commons Math.
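To illustrate the "updating" idea in the simplest possible form, here is a one-variable online least-squares sketch that uses constant memory per observation. This is a toy illustration only, not the Miller algorithm from Commons Math, which handles many variables with more numerically robust updates:

```java
// Toy online (updating) simple linear regression: each observation updates a
// handful of running sums, so no in-memory matrix of the data is ever needed.
public class OnlineRegression {
    private long n;
    private double sumX, sumY, sumXY, sumXX;

    public void add(double x, double y) {
        n++; sumX += x; sumY += y; sumXY += x * y; sumXX += x * x;
    }

    public double slope() {
        return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    }

    public double intercept() {
        return (sumY - slope() * sumX) / n;
    }

    public static void main(String[] args) {
        OnlineRegression r = new OnlineRegression();
        // Points on y = 2x + 1; the fit recovers slope 2 and intercept 1.
        for (double x = 0; x < 5; x++) r.add(x, 2 * x + 1);
        System.out.println(r.slope() + " " + r.intercept());
    }
}
```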

 






[jira] [Assigned] (SOLR-11951) Add fitMultiVariateNormalMixtureDistribution Stream Evaluator

2018-02-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11951:
-

Assignee: Joel Bernstein

> Add fitMultiVariateNormalMixtureDistribution Stream Evaluator
> -
>
> Key: SOLR-11951
> URL: https://issues.apache.org/jira/browse/SOLR-11951
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket adds Apache Commons Math's *expectation maximization (EM)* 
> multivariate normal mixture distribution fitting implementation to the 
> Streaming Expression machine learning library. 
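For readers unfamiliar with EM mixture fitting, here is a minimal self-contained 1-D, two-component sketch of the E and M steps. It is a toy illustration only, not the Commons Math multivariate implementation the ticket wraps:

```java
// Toy 1-D, two-component Gaussian mixture fit via expectation maximization.
public class TinyEm {
    static double pdf(double x, double mu, double var) {
        return Math.exp(-(x - mu) * (x - mu) / (2 * var)) / Math.sqrt(2 * Math.PI * var);
    }

    // Returns {w0, mu0, var0, w1, mu1, var1} after `iters` EM iterations.
    static double[] fit(double[] xs, double mu0, double mu1, int iters) {
        double[] w = {0.5, 0.5}, mu = {mu0, mu1}, var = {1, 1};
        for (int it = 0; it < iters; it++) {
            double[] nk = new double[2], sx = new double[2], sxx = new double[2];
            for (double x : xs) {                 // E-step: responsibilities
                double p0 = w[0] * pdf(x, mu[0], var[0]);
                double p1 = w[1] * pdf(x, mu[1], var[1]);
                double r0 = p0 / (p0 + p1), r1 = 1 - r0;
                nk[0] += r0; nk[1] += r1;
                sx[0] += r0 * x; sx[1] += r1 * x;
                sxx[0] += r0 * x * x; sxx[1] += r1 * x * x;
            }
            for (int k = 0; k < 2; k++) {         // M-step: re-estimate parameters
                w[k] = nk[k] / xs.length;
                mu[k] = sx[k] / nk[k];
                var[k] = Math.max(1e-6, sxx[k] / nk[k] - mu[k] * mu[k]);
            }
        }
        return new double[] {w[0], mu[0], var[0], w[1], mu[1], var[1]};
    }

    public static void main(String[] args) {
        // Two well-separated clusters around 0 and 10.
        double[] m = fit(new double[] {-1, 0, 1, 9, 10, 11}, -2, 12, 50);
        System.out.println(m[1] + " " + m[4]);
    }
}
```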






[jira] [Updated] (SOLR-11951) Add fitMultiVariateNormalMixtureDistribution Stream Evaluator

2018-02-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11951:
--
Description: This ticket adds Apache Commons Math *expectation maximization 
(EM)* multivariate normal mixture distribution fitting implementation to the 
Streaming Expression machine learning library.   (was: This ticket adds Apache 
Commons Math *expectation maximization* multivariate normal mixture 
distribution fitting implementation to the Streaming Expression machine 
learning library. )

> Add fitMultiVariateNormalMixtureDistribution Stream Evaluator
> -
>
> Key: SOLR-11951
> URL: https://issues.apache.org/jira/browse/SOLR-11951
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket adds Apache Commons Math *expectation maximization (EM)* 
> multivariate normal mixture distribution fitting implementation to the 
> Streaming Expression machine learning library. 






[jira] [Created] (SOLR-11951) Add fitMultiVariateNormalMixtureDistribution Stream Evaluator

2018-02-06 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11951:
-

 Summary: Add fitMultiVariateNormalMixtureDistribution Stream 
Evaluator
 Key: SOLR-11951
 URL: https://issues.apache.org/jira/browse/SOLR-11951
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket adds Apache Commons Math *expectation maximization* multivariate 
normal mixture distribution fitting implementation to the Streaming Expression 
machine learning library. 






[jira] [Commented] (SOLR-11928) ComputePlanActionTest failures

2018-02-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16353879#comment-16353879
 ] 

Steve Rowe commented on SOLR-11928:
---

Another reproducing seed (failed 5/5 times) from my Jenkins:

{noformat}
Checking out Revision 812d400807bcebc782f85dcf3bba5619421880cb 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=ComputePlanActionTest -Dtests.method=testSelectedCollections 
-Dtests.seed=94419584B1E1359D -Dtests.slow=true -Dtests.locale=es-PE 
-Dtests.timezone=Pacific/Niue -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 17.6s J7  | ComputePlanActionTest.testSelectedCollections 
<<<
   [junit4]> Throwable #1: java.lang.AssertionError: The operations 
computed by ComputePlanAction should not be 
nullSolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, null], 
BEFORE_ACTION=[compute_plan, null]}
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([94419584B1E1359D:AEEF705D8F85ECF3]:0)
   [junit4]>at 
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testSelectedCollections(ComputePlanActionTest.java:469)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
[...]
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
docValues:{}, maxPointsInLeafNode=1160, maxMBSortInHeap=5.713506940839041, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@21846d8e),
 locale=es-PE, timezone=Pacific/Niue
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_151 (64-bit)/cpus=16,threads=1,free=144730888,total=364380160
{noformat}

> ComputePlanActionTest failures
> --
>
> Key: SOLR-11928
> URL: https://issues.apache.org/jira/browse/SOLR-11928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Major
>
> Policeman Jenkins found a master seed that reproduces 4/5 times for me 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21368/]:
> {noformat}
> Checking out Revision c56d774eb6555baa099fec22f290a9b5640a366d 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=ComputePlanActionTest 
> -Dtests.method=testNodeWithMultipleReplicasLost -Dtests.seed=CC46C76535920181 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=kln-KE 
> -Dtests.timezone=Asia/Novokuznetsk -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 9.37s J0 | 
> ComputePlanActionTest.testNodeWithMultipleReplicasLost <<<
>[junit4]> Throwable #1: java.lang.AssertionError: The operations 
> computed by ComputePlanAction should not be null 
> SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, null], 
> BEFORE_ACTION=[compute_plan, null]}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([CC46C76535920181:FC8626E7BDE0E0DD]:0)
>[junit4]>  at 
> org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost(ComputePlanActionTest.java:291)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: 
> codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST_DECOMPRESSION,
>  chunkSize=9, maxDocsPerChunk=3, blockSize=330), 
> termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
>  chunkSize=9, blockSize=330)), 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@674b4e9a),
>  locale=kln-KE, timezone=Asia/Novokuznetsk
>[junit4]   2> NOTE: Linux 4.13.0-32-generic amd64/Oracle Corporation 9.0.1 
> (64-bit)/cpus=8,threads=1,free=235331048,total=518979584
> {noformat}






[jira] [Updated] (SOLR-11928) ComputePlanActionTest failures

2018-02-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-11928:
--
Summary: ComputePlanActionTest failures  (was: 
ComputePlanActionTest.testNodeWithMultipleReplicasLost() failure)

> ComputePlanActionTest failures
> --
>
> Key: SOLR-11928
> URL: https://issues.apache.org/jira/browse/SOLR-11928
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Major
>
> Policeman Jenkins found a master seed that reproduces 4/5 times for me 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21368/]:
> {noformat}
> Checking out Revision c56d774eb6555baa099fec22f290a9b5640a366d 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=ComputePlanActionTest 
> -Dtests.method=testNodeWithMultipleReplicasLost -Dtests.seed=CC46C76535920181 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=kln-KE 
> -Dtests.timezone=Asia/Novokuznetsk -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 9.37s J0 | 
> ComputePlanActionTest.testNodeWithMultipleReplicasLost <<<
>[junit4]> Throwable #1: java.lang.AssertionError: The operations 
> computed by ComputePlanAction should not be null 
> SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, null], 
> BEFORE_ACTION=[compute_plan, null]}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([CC46C76535920181:FC8626E7BDE0E0DD]:0)
>[junit4]>  at 
> org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost(ComputePlanActionTest.java:291)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: 
> codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST_DECOMPRESSION,
>  chunkSize=9, maxDocsPerChunk=3, blockSize=330), 
> termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
>  chunkSize=9, blockSize=330)), 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@674b4e9a),
>  locale=kln-KE, timezone=Asia/Novokuznetsk
>[junit4]   2> NOTE: Linux 4.13.0-32-generic amd64/Oracle Corporation 9.0.1 
> (64-bit)/cpus=8,threads=1,free=235331048,total=518979584
> {noformat}






[jira] [Created] (SOLR-11950) CLUSTERSTATUS shards parameter does not accept comma delimited list

2018-02-06 Thread Chris Ulicny (JIRA)
Chris Ulicny created SOLR-11950:
---

 Summary: CLUSTERSTATUS shards parameter does not accept comma 
delimited list
 Key: SOLR-11950
 URL: https://issues.apache.org/jira/browse/SOLR-11950
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Affects Versions: 7.2, 6.3.1, master (8.0), 7.2.1
Reporter: Chris Ulicny


According to the documentation for the Collections API, the CLUSTERSTATUS 
action should accept a comma delimited list of shards if specified. However, 
when specifying a comma delimited list, it is treated as a single value instead 
of being parsed into multiple values.

The request

.../collections?action=CLUSTERSTATUS&collection=test_collection&shard=shard1,shard2

yields the response:

{"responseHeader":{"status":400,"QTime":5},"error":{"metadata":["error-class","org.apache.solr.common.SolrException","root-error-class","org.apache.solr.common.SolrException"],"msg":"Collection: test_collection shard: shard1,shard2 not found","code":400}}

instead of locating both shard1 and shard2.
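A minimal sketch of the documented behavior (names here are illustrative, not Solr's actual handler code): the shard parameter would be split on commas before any per-shard lookup, so each name is resolved individually:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative only: parse the "shard" request parameter as a comma-delimited
// list rather than treating the whole string as one shard name.
public class ShardParam {
    static List<String> parseShards(String shardParam) {
        if (shardParam == null) return Collections.emptyList();
        List<String> shards = new ArrayList<>();
        for (String s : shardParam.split(",")) {
            if (!s.trim().isEmpty()) shards.add(s.trim());
        }
        return shards;
    }

    public static void main(String[] args) {
        System.out.println(parseShards("shard1,shard2")); // two shard names, not one
    }
}
```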






[jira] [Commented] (LUCENE-8157) GeoPolygon factory fails in recognize convex polygon

2018-02-06 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16353862#comment-16353862
 ] 

Ignacio Vera commented on LUCENE-8157:
--

Great! Agree the method can and should be implemented more efficiently.

No hurry, thanks!

 

> GeoPolygon factory fails in recognize convex polygon
> 
>
> Key: LUCENE-8157
> URL: https://issues.apache.org/jira/browse/LUCENE-8157
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8157-plane.patch, LUCENE-8157-test.patch, 
> LUCENE-8157.patch
>
>
> When a polygon contains three consecutive points that are nearly co-planar, 
> the polygon factory may fail to recognize the concavity/convexity of the 
> polygon. I think the problem is the way the sideness for a polygon edge is 
> calculated: it relies on the position of the next point with respect to the 
> previous polygon edge, which fails in the case explained above because of 
> numerical imprecision. The result is that the sideness is messed up.
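A toy illustration of the sideness problem (not the spatial3d code): evaluating a point against a plane yields a signed value, and when the point is nearly co-planar that value hovers around zero, so its sign flips with tiny numerical noise unless a tolerance is applied:

```java
// Toy sideness test for a plane a*x + b*y + c*z + d = 0.
// Returns -1 (below), 0 (within eps of the plane), or +1 (above).
public class Sideness {
    static int side(double a, double b, double c, double d,
                    double x, double y, double z, double eps) {
        double v = a * x + b * y + c * z + d;
        if (Math.abs(v) < eps) return 0;
        return v > 0 ? 1 : -1;
    }

    public static void main(String[] args) {
        // A point essentially on the plane z = 0: without the eps band its
        // sign would be decided by noise in the 13th decimal place.
        System.out.println(side(0, 0, 1, 0,  1, 2, 1e-13,  1e-10));
        System.out.println(side(0, 0, 1, 0,  1, 2, 0.5,    1e-10));
    }
}
```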






[jira] [Commented] (LUCENE-8157) GeoPolygon factory fails in recognize convex polygon

2018-02-06 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16353772#comment-16353772
 ] 

Karl Wright commented on LUCENE-8157:
-

[~ivera], that is fine, but implemented that way it creates a lot of objects, 
AND those objects get thrown away.

I'll be happy to fix it up later in the week though.



> GeoPolygon factory fails in recognize convex polygon
> 
>
> Key: LUCENE-8157
> URL: https://issues.apache.org/jira/browse/LUCENE-8157
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8157-plane.patch, LUCENE-8157-test.patch, 
> LUCENE-8157.patch
>
>
> When a polygon contains three consecutive points that are nearly co-planar, 
> the polygon factory may fail to recognize the concavity/convexity of the 
> polygon. I think the problem is the way the sideness for a polygon edge is 
> calculated: it relies on the position of the next point with respect to the 
> previous polygon edge, which fails in the case explained above because of 
> numerical imprecision. The result is that the sideness is messed up.






[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 141 - Still Failing

2018-02-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/141/

4 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.deleteCollectionOnlyInZk

Error Message:
Error from server at https://127.0.0.1:35327/solr: Cannot create collection 
onlyinzk. Value of maxShardsPerNode is 1, and the number of nodes currently 
live or live and part of your createNodeSet is 1. This allows a maximum of 1 to 
be created. Value of numShards is 2, value of nrtReplicas is 1, value of 
tlogReplicas is 0 and value of pullReplicas is 0. This requires 2 shards to be 
created (higher than the allowed number)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:35327/solr: Cannot create collection onlyinzk. 
Value of maxShardsPerNode is 1, and the number of nodes currently live or live 
and part of your createNodeSet is 1. This allows a maximum of 1 to be created. 
Value of numShards is 2, value of nrtReplicas is 1, value of tlogReplicas is 0 
and value of pullReplicas is 0. This requires 2 shards to be created (higher 
than the allowed number)
at 
__randomizedtesting.SeedInfo.seed([8780C7E913C59C79:33609B68E5C33C68]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.deleteCollectionOnlyInZk(CollectionsAPIDistributedZkTest.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Updated] (LUCENE-8162) Make it possible to throttle (Tiered)MergePolicy when commit rate is high

2018-02-06 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-8162:

Description: 
As discussed in a recent mailing list thread [1] and observed in a project 
using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to throttle 
the aggressiveness of (Tiered)MergePolicy when commit rate is high.

In the case of Apache Jackrabbit Oak a dedicated {{MergePolicy}} was 
implemented [2].

That MP doesn't merge when the number of segments is below a certain threshold 
(e.g. 30) and the commit rate (docs per sec and MB per sec) is high (e.g. above 
1000 docs/sec or 5 MB/sec).

In that implementation, the commit-rate thresholds adapt to the average commit 
rate by means of single exponential smoothing.

The results in that specific case looked encouraging: a 5% perf improvement in 
querying and ~10% reduced IO. However, Oak has some specifics which might not 
fit other scenarios, so it could be interesting to see how this behaves in a 
plain Lucene scenario.

[1] : [http://markmail.org/message/re3ifmq2664bqfjk]

[2] : 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]

  was:
As discussed in a recent mailing list thread [1] and observed in a project 
using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to throttle 
the aggressiveness of (Tiered)MergePolicy when commit rate is high.

In the case of Apache Jackrabbit Oak a dedicated {{MergePolicy}} was 
implemented [2].

That MP didn't merge in case the number of segments is below a certain 
threshold (e.g. 30) and commit rate (docs per sec and MB per sec) is high (e.g. 
above 1000 doc / sec , 5MB / sec).

In such impl, the commit rate thresholds adapt to average commit rate by means 
of single exponential smoothing.

The results in that specific case looked encouraging as it brought a 5% perf 
improvement in querying and ~10% reduced IO. However Oak has some specifics 
which might not fit in other scenarios. Anyway it could be interesting to see 
how this behaves in plain Lucene scenario.

[1] : http://markmail.org/message/re3ifmq2664bqfjk

[2] : 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]


> Make it possible to throttle (Tiered)MergePolicy when commit rate is high
> -
>
> Key: LUCENE-8162
> URL: https://issues.apache.org/jira/browse/LUCENE-8162
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Tommaso Teofili
>Priority: Major
> Fix For: trunk
>
>
> As discussed in a recent mailing list thread [1] and observed in a project 
> using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to throttle 
> the aggressiveness of (Tiered)MergePolicy when commit rate is high.
> In the case of Apache Jackrabbit Oak a dedicated {{MergePolicy}} was 
> implemented [2].
> That MP doesn't merge when the number of segments is below a certain 
> threshold (e.g. 30) and the commit rate (docs per sec and MB per sec) is high 
> (e.g. above 1000 docs/sec or 5 MB/sec).
> In that implementation, the commit-rate thresholds adapt to the average 
> commit rate by means of single exponential smoothing.
> The results in that specific case looked encouraging: a 5% perf improvement 
> in querying and ~10% reduced IO. However, Oak has some specifics which might 
> not fit other scenarios, so it could be interesting to see how this behaves 
> in a plain Lucene scenario.
> [1] : [http://markmail.org/message/re3ifmq2664bqfjk]
> [2] : 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]
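The single-exponential-smoothing scheme described above can be sketched as follows (names are hypothetical; the real logic lives in Oak's CommitMitigatingTieredMergePolicy):

```java
// Sketch of single exponential smoothing of an observed commit rate:
// s_t = alpha * x_t + (1 - alpha) * s_{t-1}, seeded with the first observation.
public class SmoothedRate {
    private final double alpha;  // smoothing factor in (0, 1]
    private double smoothed = Double.NaN;

    SmoothedRate(double alpha) { this.alpha = alpha; }

    double update(double observedRate) {
        smoothed = Double.isNaN(smoothed)
            ? observedRate
            : alpha * observedRate + (1 - alpha) * smoothed;
        return smoothed;
    }

    public static void main(String[] args) {
        // The smoothed docs/sec rate trails a burst of commits, which is what
        // lets the throttling thresholds adapt gradually rather than spike.
        SmoothedRate docsPerSec = new SmoothedRate(0.3);
        for (double r : new double[] {100, 1200, 1300, 1250}) {
            System.out.println(docsPerSec.update(r));
        }
    }
}
```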






[jira] [Updated] (LUCENE-8162) Make it possible to throttle (Tiered)MergePolicy when commit rate is high

2018-02-06 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-8162:

Description: 
As discussed in a recent mailing list thread [1] and observed in a project 
using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to throttle 
the aggressiveness of (Tiered)MergePolicy when commit rate is high.

In the case of Apache Jackrabbit Oak a dedicated {{MergePolicy}} was 
implemented [2].

That MP didn't merge in case the number of segments is below a certain 
threshold (e.g. 30) and commit rate (docs per sec and MB per sec) is high (e.g. 
above 1000 doc / sec , 5MB / sec).

In such impl, the commit rate thresholds adapt to average commit rate by means 
of single exponential smoothing.

The results in that specific case looked encouraging as it brought a 5% perf 
improvement in querying and ~10% reduced IO. However Oak has some specifics 
which might not fit in other scenarios. Anyway it could be interesting to see 
how this behaves in plain Lucene scenario.

[1] : http://markmail.org/message/re3ifmq2664bqfjk

[2] : 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]

  was:
As discussed in a [recent mailing list 
thread|[http://markmail.org/message/re3ifmq2664bqfjk]] and observed in a 
project using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to 
throttle the aggressiveness of (Tiered)MergePolicy when commit rate is high.

In the case of Apache Jackrabbit Oak a dedicated {{MergePolicy}} was 
implemented [1].

That MP didn't merge in case the number of segments is below a certain 
threshold (e.g. 30) and commit rate (docs per sec and MB per sec) is high (e.g. 
above 1000 doc / sec , 5MB / sec).

In such impl, the commit rate thresholds adapt to average commit rate by means 
of single exponential smoothing.

The results in that specific case looked encouraging as it brought a 5% perf 
improvement in querying and ~10% reduced IO. However Oak has some specifics 
which might not fit in other scenarios. Anyway it could be interesting to see 
how this behaves in plain Lucene scenario.

[1] : 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]


> Make it possible to throttle (Tiered)MergePolicy when commit rate is high
> -
>
> Key: LUCENE-8162
> URL: https://issues.apache.org/jira/browse/LUCENE-8162
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Tommaso Teofili
>Priority: Major
> Fix For: trunk
>
>
> As discussed in a recent mailing list thread [1] and observed in a project 
> using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to throttle 
> the aggressiveness of (Tiered)MergePolicy when commit rate is high.
> In the case of Apache Jackrabbit Oak a dedicated {{MergePolicy}} was 
> implemented [2].
> That MP didn't merge in case the number of segments is below a certain 
> threshold (e.g. 30) and commit rate (docs per sec and MB per sec) is high 
> (e.g. above 1000 doc / sec , 5MB / sec).
> In such impl, the commit rate thresholds adapt to average commit rate by 
> means of single exponential smoothing.
> The results in that specific case looked encouraging as it brought a 5% perf 
> improvement in querying and ~10% reduced IO. However Oak has some specifics 
> which might not fit in other scenarios. Anyway it could be interesting to see 
> how this behaves in plain Lucene scenario.
> [1] : http://markmail.org/message/re3ifmq2664bqfjk
> [2] : 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]






[jira] [Updated] (LUCENE-8162) Make it possible to throttle (Tiered)MergePolicy when commit rate is high

2018-02-06 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-8162:

Description: 
As discussed in a [recent mailing list 
thread|http://markmail.org/message/re3ifmq2664bqfjk] and observed in a 
project using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to 
throttle the aggressiveness of (Tiered)MergePolicy when the commit rate is high.

In the case of Apache Jackrabbit Oak, a dedicated {{MergePolicy}} was 
implemented [1].

That MP skipped merging when the number of segments was below a certain 
threshold (e.g. 30) and the commit rate (docs per second and MB per second) 
was high (e.g. above 1000 docs/sec or 5 MB/sec).

In that implementation, the commit-rate thresholds adapt to the average 
commit rate by means of single exponential smoothing.

The results in that specific case looked encouraging: a 5% performance 
improvement in querying and ~10% less IO. However, Oak has some specifics 
which might not fit other scenarios, so it could be interesting to see how 
this behaves in a plain Lucene scenario.

[1] : 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]
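For a sense of the mechanism, here is a minimal sketch of single exponential smoothing driving such a throttle. All class, method, and parameter names are hypothetical illustrations, not the Oak `CommitMitigatingTieredMergePolicy` API or any Lucene interface.

```java
// Hypothetical sketch: a commit-rate tracker using single exponential
// smoothing (s_t = alpha * x_t + (1 - alpha) * s_{t-1}) to decide whether
// merging should be skipped, as described for the Oak merge policy above.
class CommitRateSmoother {
  private final double alpha;   // smoothing factor in (0, 1]
  private double smoothedRate;  // exponentially smoothed rate (e.g. docs/sec)
  private boolean initialized;

  CommitRateSmoother(double alpha) {
    if (alpha <= 0.0 || alpha > 1.0) {
      throw new IllegalArgumentException("alpha must be in (0, 1]");
    }
    this.alpha = alpha;
  }

  /** Feed the rate observed for the latest commit; returns the new average. */
  double update(double observedRate) {
    if (!initialized) {
      smoothedRate = observedRate;  // seed the average with the first sample
      initialized = true;
    } else {
      smoothedRate = alpha * observedRate + (1.0 - alpha) * smoothedRate;
    }
    return smoothedRate;
  }

  /** Skip merging while the index is small and the commit rate is hot. */
  boolean shouldThrottle(int segmentCount, int maxSegments, double rateThreshold) {
    return segmentCount < maxSegments && smoothedRate > rateThreshold;
  }
}
```

With alpha = 0.5, two samples of 1000 and 2000 docs/sec smooth to 1500, which would throttle merging for a 10-segment index under a 30-segment / 1200 docs/sec policy.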

  was:
As discussed in a [recent mailing list 
thread|[http://markmail.org/message/re3ifmq2664bqfjk],]] and observed in a 
project using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to 
throttle the aggressiveness of (Tiered)MergePolicy when commit rate is high.

In the case of Apache Jackrabbit Oak a dedicated {{MergePolicy}} was 
implemented [1].

That MP didn't merge in case the number of segments is below a certain 
threshold (e.g. 30) and commit rate (docs per sec and MB per sec) is high (e.g. 
above 1000 doc / sec , 5MB / sec).

In such impl, the commit rate thresholds adapt to average commit rate by means 
of single exponential smoothing.

The results in that specific case looked encouraging as it brought a 5% perf 
improvement in querying and ~10% reduced IO. However Oak has some specifics 
which might not fit in other scenarios. Anyway it could be interesting to see 
how this behaves in plain Lucene scenario.

[1] : 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]


> Make it possible to throttle (Tiered)MergePolicy when commit rate is high
> -
>
> Key: LUCENE-8162
> URL: https://issues.apache.org/jira/browse/LUCENE-8162
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Tommaso Teofili
>Priority: Major
> Fix For: trunk
>
>
> As discussed in a [recent mailing list 
> thread|http://markmail.org/message/re3ifmq2664bqfjk] and observed in a 
> project using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to 
> throttle the aggressiveness of (Tiered)MergePolicy when the commit rate is high.
> In the case of Apache Jackrabbit Oak, a dedicated {{MergePolicy}} was 
> implemented [1].
> That MP skipped merging when the number of segments was below a certain 
> threshold (e.g. 30) and the commit rate (docs per second and MB per second) 
> was high (e.g. above 1000 docs/sec or 5 MB/sec).
> In that implementation, the commit-rate thresholds adapt to the average 
> commit rate by means of single exponential smoothing.
> The results in that specific case looked encouraging: a 5% performance 
> improvement in querying and ~10% less IO. However, Oak has some specifics 
> which might not fit other scenarios, so it could be interesting to see how 
> this behaves in a plain Lucene scenario.
> [1] : 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]






[jira] [Comment Edited] (LUCENE-8157) GeoPolygon factory fails in recognize convex polygon

2018-02-06 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353498#comment-16353498
 ] 

Ignacio Vera edited comment on LUCENE-8157 at 2/6/18 8:40 AM:
--

I think what we need is a method that checks coplanarity. For example, if we 
accept that Geo3d coplanarity is defined as any one of the three points lying 
on one of the planes constructed from the other two:

 
{code:java}
static boolean pointsCoplanar(GeoPoint A, GeoPoint B, GeoPoint C) {
  Plane AB = new Plane(A, B);
  Plane AC = new Plane(A, C);
  Plane BC = new Plane(B, C);
  return AB.evaluateIsZero(C) ||  AC.evaluateIsZero(B) || BC.evaluateIsZero(A);
}{code}
 

In this way we encapsulate the definition, and the method can be used whenever 
we check for such a property.
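For intuition, the test above can be sketched with plain 3-vectors: a Geo3d plane through two unit-sphere points also passes through the origin, so evaluating the third point against it roughly amounts to checking that the scalar triple product of the three position vectors is near zero. The class and method names below are hypothetical, and this is not the Lucene spatial3d API; the epsilon is an illustrative placeholder, not Geo3d's actual tolerance.

```java
// Illustrative (non-Lucene) coplanarity check with raw 3D position vectors.
class CoplanarityCheck {
  static boolean nearlyCoplanar(double[] a, double[] b, double[] c, double eps) {
    // cross product b x c
    double tx = b[1] * c[2] - b[2] * c[1];
    double ty = b[2] * c[0] - b[0] * c[2];
    double tz = b[0] * c[1] - b[1] * c[0];
    // triple product a . (b x c): zero when a, b, c, and the origin
    // lie in a single plane
    double triple = a[0] * tx + a[1] * ty + a[2] * tz;
    return Math.abs(triple) < eps;
  }
}
```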


was (Author: ivera):
I think what we need is a method that checks coplanarity:
{code:java}
static boolean pointsCoplanar(GeoPoint A, GeoPoint B, GeoPoint C) {
  Plane AB = new Plane(A, B);
  Plane AC = new Plane(A, C);
  Plane BC = new Plane(B, C);
  return AB.evaluateIsZero(C) ||  AC.evaluateIsZero(B) || BC.evaluateIsZero(A);
}{code}
Then we encapsulate the definition and it can be used whenever we check 
for such a property. Maybe a method in the Plane class?

> GeoPolygon factory fails in recognize convex polygon
> 
>
> Key: LUCENE-8157
> URL: https://issues.apache.org/jira/browse/LUCENE-8157
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8157-plane.patch, LUCENE-8157-test.patch, 
> LUCENE-8157.patch
>
>
> When a polygon contains three consecutive points which are nearly co-planar, 
> the polygon factory may fail to recognize the concavity/convexity of the 
> polygon. I think the problem is the way the sideness for a polygon edge is 
> calculated. It relies on the position of the next point with respect to the 
> previous polygon edge, which fails in the case explained above because of 
> numerical imprecision. The result is that the sideness is wrong.






[jira] [Updated] (LUCENE-8162) Make it possible to throttle (Tiered)MergePolicy when commit rate is high

2018-02-06 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-8162:

Description: 
As discussed in a [recent mailing list 
thread|http://markmail.org/message/re3ifmq2664bqfjk] and observed in a 
project using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to 
throttle the aggressiveness of (Tiered)MergePolicy when the commit rate is high.

In the case of Apache Jackrabbit Oak, a dedicated {{MergePolicy}} was 
implemented [1].

That MP skipped merging when the number of segments was below a certain 
threshold (e.g. 30) and the commit rate (docs per second and MB per second) 
was high (e.g. above 1000 docs/sec or 5 MB/sec).

In that implementation, the commit-rate thresholds adapt to the average 
commit rate by means of single exponential smoothing.

The results in that specific case looked encouraging: a 5% performance 
improvement in querying and ~10% less IO. However, Oak has some specifics 
which might not fit other scenarios, so it could be interesting to see how 
this behaves in a plain Lucene scenario.

[1] : 
[https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]

  was:
As discussed in a [recent mailing list 
thread|[http://markmail.org/message/re3ifmq2664bqfjk|http://markmail.org/message/re3ifmq2664bqfjk],]]
 and observed in a project using Lucene (see OAK-5192 and OAK-6710), it is 
sometimes helpful to throttle the aggressiveness of (Tiered)MergePolicy when 
commit rate is high.

In the case of Apache Jackrabbit Oak a dedicated {{MergePolicy}} was 
implemented [1].

That MP didn't merge in case the number of segments is below a certain 
threshold (e.g. 30) and commit rate (docs per sec and MB per sec) is high (e.g. 
above 1000 doc / sec , 5MB / sec). The results in that specific case looked 
encouraging as it brought a 5% perf improvement in querying and ~10% reduced 
IO. However Oak has some specifics which might not fit in other scenarios. 
Anyway it could be interesting to see how this behaves in plain Lucene scenario.

[1] : 
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java


> Make it possible to throttle (Tiered)MergePolicy when commit rate is high
> -
>
> Key: LUCENE-8162
> URL: https://issues.apache.org/jira/browse/LUCENE-8162
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Tommaso Teofili
>Priority: Major
> Fix For: trunk
>
>
> As discussed in a [recent mailing list 
> thread|http://markmail.org/message/re3ifmq2664bqfjk] and observed in a 
> project using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to 
> throttle the aggressiveness of (Tiered)MergePolicy when the commit rate is high.
> In the case of Apache Jackrabbit Oak, a dedicated {{MergePolicy}} was 
> implemented [1].
> That MP skipped merging when the number of segments was below a certain 
> threshold (e.g. 30) and the commit rate (docs per second and MB per second) 
> was high (e.g. above 1000 docs/sec or 5 MB/sec).
> In that implementation, the commit-rate thresholds adapt to the average 
> commit rate by means of single exponential smoothing.
> The results in that specific case looked encouraging: a 5% performance 
> improvement in querying and ~10% less IO. However, Oak has some specifics 
> which might not fit other scenarios, so it could be interesting to see how 
> this behaves in a plain Lucene scenario.
> [1] : 
> [https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java]






[jira] [Created] (LUCENE-8162) Make it possible to throttle (Tiered)MergePolicy when commit rate is high

2018-02-06 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created LUCENE-8162:
---

 Summary: Make it possible to throttle (Tiered)MergePolicy when 
commit rate is high
 Key: LUCENE-8162
 URL: https://issues.apache.org/jira/browse/LUCENE-8162
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Tommaso Teofili
 Fix For: trunk


As discussed in a [recent mailing list 
thread|http://markmail.org/message/re3ifmq2664bqfjk] and observed in a 
project using Lucene (see OAK-5192 and OAK-6710), it is sometimes helpful to 
throttle the aggressiveness of (Tiered)MergePolicy when the commit rate is high.

In the case of Apache Jackrabbit Oak, a dedicated {{MergePolicy}} was 
implemented [1].

That MP skipped merging when the number of segments was below a certain 
threshold (e.g. 30) and the commit rate (docs per second and MB per second) 
was high (e.g. above 1000 docs/sec or 5 MB/sec). The results in that specific 
case looked encouraging: a 5% performance improvement in querying and ~10% 
less IO. However, Oak has some specifics which might not fit other scenarios, 
so it could be interesting to see how this behaves in a plain Lucene scenario.

[1] : 
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-lucene/src/main/java/org/apache/jackrabbit/oak/plugins/index/lucene/writer/CommitMitigatingTieredMergePolicy.java






[jira] [Commented] (LUCENE-8156) patch-mrjar-classes fails if an old version of ASM is on the Ant classpath

2018-02-06 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353541#comment-16353541
 ] 

Adrien Grand commented on LUCENE-8156:
--

[~thetaphi] Requiring users not to have ASM on the Ant classpath sounds totally 
reasonable to me if this issue can't be fixed easily. Thanks for adding this 
test; the error message is definitely better, as I had to do quite some 
digging to understand what the problem was about. +1 to commit

> patch-mrjar-classes fails if an old version of ASM is on the Ant classpath
> --
>
> Key: LUCENE-8156
> URL: https://issues.apache.org/jira/browse/LUCENE-8156
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: LUCENE-8156.patch, image-2018-02-06-00-00-35-434.png
>
>
> If some optional tasks that depend on an old version of ASM are installed, 
> patching fails with the following error: 
> {{/home/jpountz/src/lucene-solr/lucene/common-build.xml:565: 
> java.lang.IncompatibleClassChangeError: class 
> org.objectweb.asm.commons.ClassRemapper has interface 
> org.objectweb.asm.ClassVisitor as super class}}
> The reason is that ClassRemapper is loaded from the right place, but 
> ClassVisitor, its parent class, is loaded from the parent classpath which may 
> be a different version.
> It is easy to reproduce:
>  - download and extract ant-1.10.1 (latest version)
>  - run {{bin/ant -f fetch.xml -Ddest=system}}, this will add 
> {{lib/asm-2.2.3.jar}} among other files
>  - run {{ant clean test}} at the root of lucene-solr.






[jira] [Commented] (LUCENE-8156) patch-mrjar-classes fails if an old version of ASM is on the Ant classpath

2018-02-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353533#comment-16353533
 ] 

Uwe Schindler commented on LUCENE-8156:
---

BTW, Forbiddenapis would have the same problem, but it works around the issue 
by having a private shaded ASM in its own JAR.

> patch-mrjar-classes fails if an old version of ASM is on the Ant classpath
> --
>
> Key: LUCENE-8156
> URL: https://issues.apache.org/jira/browse/LUCENE-8156
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: LUCENE-8156.patch, image-2018-02-06-00-00-35-434.png
>
>
> If some optional tasks that depend on an old version of ASM are installed, 
> patching fails with the following error: 
> {{/home/jpountz/src/lucene-solr/lucene/common-build.xml:565: 
> java.lang.IncompatibleClassChangeError: class 
> org.objectweb.asm.commons.ClassRemapper has interface 
> org.objectweb.asm.ClassVisitor as super class}}
> The reason is that ClassRemapper is loaded from the right place, but 
> ClassVisitor, its parent class, is loaded from the parent classpath which may 
> be a different version.
> It is easy to reproduce:
>  - download and extract ant-1.10.1 (latest version)
>  - run {{bin/ant -f fetch.xml -Ddest=system}}, this will add 
> {{lib/asm-2.2.3.jar}} among other files
>  - run {{ant clean test}} at the root of lucene-solr.


