[jira] [Commented] (SOLR-8335) HdfsLockFactory does not allow core to come up after a node was killed

2018-10-15 Thread Mano Kovacs (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651234#comment-16651234
 ] 

Mano Kovacs commented on SOLR-8335:
---

Review would be greatly appreciated!
https://github.com/apache/lucene-solr/pull/471
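
Until the fix lands, the only recovery when this happens is to confirm that no 
live node still holds the index and then remove the stale lock by hand, as the 
error message suggests ("Please verify locks manually!"). Below is a minimal, 
hypothetical sketch of that manual step using the Hadoop FileSystem API; the path 
is just an example based on the reproduction steps in the issue, so adjust it to 
your core's actual data directory:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoveStaleHdfsLock {
  public static void main(String[] args) throws Exception {
    // Example path only: write.lock sits in the core's index directory in HDFS.
    Path lock = new Path("hdfs://localhost:9000/solr/test/data/index/write.lock");
    try (FileSystem fs = FileSystem.get(lock.toUri(), new Configuration())) {
      // Only safe once the crashed node is confirmed to be down.
      if (fs.exists(lock)) {
        fs.delete(lock, false);
      }
    }
  }
}
{code}

The same cleanup can be done from the shell with {{bin/hdfs dfs -rm}} on the lock path.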

> HdfsLockFactory does not allow core to come up after a node was killed
> --
>
> Key: SOLR-8335
> URL: https://issues.apache.org/jira/browse/SOLR-8335
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1
>Reporter: Varun Thacker
>Assignee: Mark Miller
>Priority: Major
> Attachments: SOLR-8335.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When using HdfsLockFactory, if a node gets killed instead of shut down 
> gracefully, the write.lock file remains in HDFS. The next time you start the 
> node, the core doesn't load because of a LockObtainFailedException.
> I was able to reproduce this in all 5.x versions of Solr. The problem wasn't 
> there when I tested 4.10.4.
> Steps to reproduce this on 5.x:
> 1. Create directory in HDFS : {{bin/hdfs dfs -mkdir /solr}}
> 2. Start Solr: {{bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory 
> -Dsolr.lock.type=hdfs -Dsolr.data.dir=hdfs://localhost:9000/solr 
> -Dsolr.updatelog=hdfs://localhost:9000/solr}}
> 3. Create core: {{./bin/solr create -c test -n data_driven}}
> 4. Kill solr
> 5. The lock file is there in HDFS and is called {{write.lock}}
> 6. Start Solr again and you get a stack trace like this:
> {code}
> 2015-11-23 13:28:04.287 ERROR (coreLoadExecutor-6-thread-1) [   x:test] 
> o.a.s.c.CoreContainer Error creating core [test]: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> org.apache.solr.common.SolrException: Index locked for write for core 'test'. 
> Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please 
> verify locks manually!
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:820)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:659)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:723)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked 
> for write for core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:528)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:761)
> ... 9 more
> 2015-11-23 13:28:04.289 ERROR (coreContainerWorkExecutor-2-thread-1) [   ] 
> o.a.s.c.CoreContainer Error waiting for SolrCore to be created
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core [test]
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.solr.core.CoreContainer$2.run(CoreContainer.java:472)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.SolrException: Unable to create core [test]
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:737)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> ... 5 more
> Caused by: org.apache.solr.common.SolrException: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:820)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:659)
> 

[GitHub] lucene-solr pull request #471: SOLR-8335 HdfsLockFactory should eventually r...

2018-10-15 Thread manokovacs
GitHub user manokovacs opened a pull request:

https://github.com/apache/lucene-solr/pull/471

SOLR-8335 HdfsLockFactory should eventually release index after crash.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manokovacs/lucene-solr SOLR-8335-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/471.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #471


commit c69a183672b392f7e2ba68ef227f076418eeda26
Author: manokovacs 
Date:   2018-09-20T14:54:18Z

SOLR-8335 HdfsLockFactory should eventually release index after crash.




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651228#comment-16651228
 ] 

ASF subversion and git services commented on SOLR-12739:


Commit af8e031a61a1ad770e96510113c46622e93c6970 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=af8e031a ]

SOLR-12739: Clear all collections in TestCollectionStateWatchers setup so that 
the collections created by test methods are spread evenly in the cluster.

(cherry picked from commit aa0a5289e692286297762d54434ae726333a5b64)


> Make autoscaling policy based replica placement the default strategy for 
> placing replicas
> -
>
> Key: SOLR-12739
> URL: https://issues.apache.org/jira/browse/SOLR-12739
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, 
> SOLR-12739.patch, SOLR-12739.patch
>
>
> Today the default placement strategy is the same one used since Solr 4.x, 
> which is to select nodes in a round-robin fashion. I propose to make 
> autoscaling policy based replica placement the default policy for placing 
> replicas.
> This is related to SOLR-12648 where, even though we have default cluster 
> preferences, we don't use them unless a policy is also configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651226#comment-16651226
 ] 

ASF subversion and git services commented on SOLR-12739:


Commit aa0a5289e692286297762d54434ae726333a5b64 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aa0a528 ]

SOLR-12739: Clear all collections in TestCollectionStateWatchers setup so that 
the collections created by test methods are spread evenly in the cluster.


> Make autoscaling policy based replica placement the default strategy for 
> placing replicas
> -
>
> Key: SOLR-12739
> URL: https://issues.apache.org/jira/browse/SOLR-12739
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, 
> SOLR-12739.patch, SOLR-12739.patch
>
>
> Today the default placement strategy is the same one used since Solr 4.x, 
> which is to select nodes in a round-robin fashion. I propose to make 
> autoscaling policy based replica placement the default policy for placing 
> replicas.
> This is related to SOLR-12648 where, even though we have default cluster 
> preferences, we don't use them unless a policy is also configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12740) Deprecate rule based replica placement strategy

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651224#comment-16651224
 ] 

ASF subversion and git services commented on SOLR-12740:


Commit b637737260597f623b2c6da949a5abbdef921360 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b637737 ]

SOLR-12740: migration docs


> Deprecate rule based replica placement strategy
> ---
>
> Key: SOLR-12740
> URL: https://issues.apache.org/jira/browse/SOLR-12740
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12740.patch
>
>
> We should officially mark the rule based replica placement strategy as 
> deprecated. This will involve:
> # Creating a ref guide document to help users migrate to the policy rule 
> syntax
> # Returning a deprecation warning from the create collection API if the rules 
> parameter is used



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12740) Deprecate rule based replica placement strategy

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651220#comment-16651220
 ] 

ASF subversion and git services commented on SOLR-12740:


Commit 8d3810df548e1edd88b7b8a68703362b590dca6a in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8d3810d ]

SOLR-12740: migration docs


> Deprecate rule based replica placement strategy
> ---
>
> Key: SOLR-12740
> URL: https://issues.apache.org/jira/browse/SOLR-12740
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12740.patch
>
>
> We should officially mark the rule based replica placement strategy as 
> deprecated. This will involve:
> # Creating a ref guide document to help users migrate to the policy rule 
> syntax
> # Returning a deprecation warning from the create collection API if the rules 
> parameter is used



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 345 - Still Failing

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/345/

No tests ran.

Build Log:
[...truncated 23300 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2430 links (1982 relative) to 3170 anchors in 245 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.6.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings ::

[jira] [Commented] (SOLR-12806) when strict=false is specified, prioritize node allocation using non strict rules

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651166#comment-16651166
 ] 

ASF subversion and git services commented on SOLR-12806:


Commit 85aae2ec0edf716b0d9bbf9923e3382aa919d8d4 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=85aae2e ]

SOLR-12806: use autoscaling policies with strict=false to prioritize node 
allocation
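
Conceptually, a non-strict rule should then act as a tie-breaker instead of being 
dropped entirely. As a rough illustration only (the class and field names below are 
hypothetical, not Solr's actual Policy/Suggester classes), the ordering the issue asks 
for means that even when no node satisfies the non-strict {{freedisk}} rule, nodes 
with more free disk are still preferred:

{code:java}
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-ins for illustration; not Solr classes.
class NodeStat {
  final String node;
  final double freediskGb;
  NodeStat(String node, double freediskGb) {
    this.node = node;
    this.freediskGb = freediskGb;
  }
}

class NonStrictPreference {
  // Even when no candidate satisfies the non-strict freedisk rule, rank the
  // candidates so that a node with 450GB free beats a node with 400GB free.
  static void order(List<NodeStat> candidates) {
    candidates.sort(Comparator.comparingDouble((NodeStat n) -> n.freediskGb).reversed());
  }
}
{code}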


> when strict=false is specified, prioritize node allocation using non strict 
> rules
> -
>
> Key: SOLR-12806
> URL: https://issues.apache.org/jira/browse/SOLR-12806
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-12806.patch
>
>
> For instance, suppose a policy rule such as the following exists:
> {code:java}
> {"replica" : "#ALL", "freedisk" : "<500", "strict" : false}
> {code}
>  
> If no nodes have {{freedisk}} of more than 500 GB, Solr ignores this rule 
> completely and assigns nodes. Ideally it should still prefer a node with 
> {{freedisk}} of 450GB compared to a node that has {{freedisk}} of 400GB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12806) when strict=false is specified, prioritize node allocation using non strict rules

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651162#comment-16651162
 ] 

ASF subversion and git services commented on SOLR-12806:


Commit 9c7b8564d8362afa33989d5f7d615868b408a1e6 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9c7b856 ]

SOLR-12806: use autoscaling policies with strict=false to prioritize node 
allocation


> when strict=false is specified, prioritize node allocation using non strict 
> rules
> -
>
> Key: SOLR-12806
> URL: https://issues.apache.org/jira/browse/SOLR-12806
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-12806.patch
>
>
> For instance, suppose a policy rule such as the following exists:
> {code:java}
> {"replica" : "#ALL", "freedisk" : "<500", "strict" : false}
> {code}
>  
> If no nodes have {{freedisk}} of more than 500 GB, Solr ignores this rule 
> completely and assigns nodes. Ideally it should still prefer a node with 
> {{freedisk}} of 450GB compared to a node that has {{freedisk}} of 400GB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 863 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/863/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

14 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at http://127.0.0.1:48283/solr/collection1_shard2_replica_n3: 
Expected mime type application/octet-stream but got text/html. The response body 
is a Jetty error page: HTTP ERROR 404, Problem accessing 
/solr/collection1_shard2_replica_n3/update, Reason: Can not find: 
/solr/collection1_shard2_replica_n3/update (Powered by Jetty 9.4.11.v20180605)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:48283/solr/collection1_shard2_replica_n3: Expected 
mime type application/octet-stream but got text/html. The response body is a 
Jetty error page: HTTP ERROR 404, Problem accessing 
/solr/collection1_shard2_replica_n3/update, Reason: Can not find: 
/solr/collection1_shard2_replica_n3/update (Powered by Jetty 9.4.11.v20180605)



at 
__randomizedtesting.SeedInfo.seed([7AD3B684BDEC99D9:B8648AECBEAC69A1]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.Abstrac

[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-11) - Build # 106 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/106/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseSerialGC

61 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([325AEE5D3E5CD950]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([325AEE5D3E5CD950]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsea

Re: subscribe

2018-10-15 Thread Zheng Lin Edwin Yeo
Please send an email to solr-user-subscr...@lucene.apache.org to subscribe to
the mailing list. For the developer list, it is dev-subscr...@lucene.apache.org.
You can find more info here: http://lucene.apache.org/solr/community.html

Regards,
Edwin

On Tue, 16 Oct 2018 at 11:44, Karthik Gullapalli 
wrote:

>
>


[jira] [Commented] (SOLR-5005) JavaScriptRequestHandler

2018-10-15 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651090#comment-16651090
 ] 

David Smiley commented on SOLR-5005:


Thanks for the code review offer!  This one's on my long list of things to add 
to Solr – definitely very useful.  I haven't touched it in years, so that's the 
status.  At the moment I'm trying to prioritize stuff that'd be important for 
8.0; this is a new feature, so it isn't something I'm going to take up in the 
next couple of months.
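
For readers new to the idea, the quoted description below explains the intent. As a 
very rough, hypothetical illustration of "a script that reacts to one search before 
formulating another" (this is not the API of the attached patches, just the general 
shape of the idea using the JDK's Nashorn script engine):

{code:java}
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptedSearchSketch {
  public static void main(String[] args) throws Exception {
    // Pretend result of a first in-VM search; a real handler would run a query here.
    long firstHitCount = 0;

    // A user-supplied script decides what the follow-up request should look like.
    ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");
    js.put("firstHitCount", firstHitCount);
    Object followUpParams = js.eval(
        "firstHitCount == 0 ? 'q=*:*&rows=10' : 'q=popular:true&rows=' + firstHitCount");

    System.out.println("follow-up params: " + followUpParams);
  }
}
{code}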

> JavaScriptRequestHandler
> 
>
> Key: SOLR-5005
> URL: https://issues.apache.org/jira/browse/SOLR-5005
> Project: Solr
>  Issue Type: New Feature
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-5005.patch, SOLR-5005.patch, SOLR-5005.patch, 
> SOLR-5005_ScriptRequestHandler_take3.patch, 
> SOLR-5005_ScriptRequestHandler_take3.patch, patch
>
>
> A user customizable script based request handler would be very useful.  It's 
> inspired by the ScriptUpdateRequestProcessor, but on the search end. A user 
> could write a script that submits searches to Solr (in-VM) and can react to 
> the results of one search before making another that is formulated 
> dynamically.  And it can assemble the response data, potentially reducing 
> both the latency and the data that would move over the wire if this feature 
> didn't exist.  It could also be used to easily add a user-specifiable search 
> API at the Solr server with request parameters governed by what the user 
> wants to advertise -- especially useful within enterprises.  And it could be 
> used to enforce security requirements on allowable parameter values to 
> Solr, so a javascript based Solr client could be allowed to talk only to a 
> script based request handler which enforces the rules.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



subscribe

2018-10-15 Thread Karthik Gullapalli



subscribe

2018-10-15 Thread Karthik Gullapalli



[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk-9.0.4) - Build # 107 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/107/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC

10 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([C933E192E53DE611]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:380)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:802)
at 
org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([C933E192E53DE611]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:380)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:802)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:297)
at jdk.internal.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-repro - Build # 1707 - Unstable

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1707/

[...truncated 33 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2879/consoleText

[repro] Revision: 73a413cd85ca03dae69250189b9c6ae24f42801c

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testSimpleCollectionWatch -Dtests.seed=AE4DE51E3D9DA080 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=de-AT 
-Dtests.timezone=America/Indiana/Marengo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
73a413cd85ca03dae69250189b9c6ae24f42801c
[repro] git fetch
[repro] git checkout 73a413cd85ca03dae69250189b9c6ae24f42801c

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   TestCollectionStateWatchers
[repro] ant compile-test

[...truncated 2560 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror  
-Dtests.seed=AE4DE51E3D9DA080 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=de-AT -Dtests.timezone=America/Indiana/Marengo 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 295 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers

[repro] Re-testing 100% failures at the tip of master
[repro] git fetch
[repro] git checkout master

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   TestCollectionStateWatchers
[repro] ant compile-test

[...truncated 2452 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror  
-Dtests.seed=AE4DE51E3D9DA080 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=de-AT -Dtests.timezone=America/Indiana/Marengo 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 2620 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   2/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers
[repro] git checkout 73a413cd85ca03dae69250189b9c6ae24f42801c

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (LUCENE-8532) nori analyzer issue with trailing space

2018-10-15 Thread Kiju Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiju Kim updated LUCENE-8532:
-
Description: 
We can reproduce it from Elasticsearch.

When we run the following command:

GET _analyze
{
  "analyzer": "nori",
  "text": "공단시"
}

It returns the following as expected:

{
  "tokens": [
    {
      "token": "공단",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 0
    },
    {
      "token": "시",
      "start_offset": 2,
      "end_offset": 3,
      "type": "word",
      "position": 1
    }
  ]
}

But if we run with "공단시 " (with a trailing space)

GET _analyze
{
  "analyzer": "nori",
  "text": "공단시 "
}

It returns

{
  "tokens": [
    {
      "token": "공단",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 0
    },
    {
      *"token": "씨",*
      "start_offset": 2,
      "end_offset": 3,
      "type": "word",
      "position": 1
    }
  ]
}

The second token should be "시" instead of  "씨".

  was:
We can reproduce it from Elasticsearch.

When we run the following command:

GET _analyze
{
  "analyzer": "nori",
  "text": "공단시"
}

It returns the following as expected:

{
   "tokens": [
    {

      "token": "공단",

      "start_offset": 0,

      "end_offset": 2,

      "type": "word",

      "position": 0

    },
    {

      "token": "시",

      "start_offset": 2,

      "end_offset": 3,

      "type": "word",

      "position": 1

    }

  ]
 }

But if we run with "공단시 " (with a trailing space)

GET _analyze

{

  "analyzer": "nori",

  "text": "공단시 "

}

It returns

{
   "tokens": [
    {

      "token": "공단",

      "start_offset": 0,

      "end_offset": 2,

      "type": "word",

      "position": 0

    },
    {

      *"token": "씨",*

      "start_offset": 2,

      "end_offset": 3,

      "type": "word",

      "position": 1

    }

  ]
 }

The second token should be "시" instead of  "씨".


> nori analyzer issue with trailing space
> ---
>
> Key: LUCENE-8532
> URL: https://issues.apache.org/jira/browse/LUCENE-8532
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 7.4
> Environment: Elasticsearch version: Version: Version: 6.4.2, Build: 
> default/tar/04711c2/2018-09-26T13:34:09.098244Z, JVM: 1.8.0_131
> Plugins installed: [analysis-nori]
> JVM version:
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
> OS version: Darwin Kijuui-MacBook-Pro.local 17.7.0 Darwin Kernel Version 
> 17.7.0: Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64 
> x86_64
>Reporter: Kiju Kim
>Priority: Major
>
> We can reproduce it from Elasticsearch.
> When we run the following command:
> GET _analyze
> {
>   "analyzer": "nori",
>   "text": "공단시"
> }
> It returns the following as expected:
> {
>   "tokens": [
>     {
>       "token": "공단",
>       "start_offset": 0,
>       "end_offset": 2,
>       "type": "word",
>       "position": 0
>     },
>     {
>       "token": "시",
>       "start_offset": 2,
>       "end_offset": 3,
>       "type": "word",
>       "position": 1
>     }
>   ]
> }
> But if we run with "공단시 " (with a trailing space)
> GET _analyze
> {
>   "analyzer": "nori",
>   "text": "공단시 "
> }
> It returns
> {
>   "tokens": [
>     {
>       "token": "공단",
>       "start_offset": 0,
>       "end_offset": 2,
>       "type": "word",
>       "position": 0
>     },
>     {
>       *"token": "씨",*
>       "start_offset": 2,
>       "end_offset": 3,
>       "type": "word",
>       "position": 1
>     }
>   ]
> }
> The second token should be "시" instead of  "씨".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8532) nori analyzer issue with trailing space

2018-10-15 Thread Kiju Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiju Kim updated LUCENE-8532:
-
Description: 
We can reproduce it from Elasticsearch.

When we run the following command:

GET _analyze
{
  "analyzer": "nori",
  "text": "공단시"
}

It returns the following as expected:

{
   "tokens": [
    {

      "token": "공단",

      "start_offset": 0,

      "end_offset": 2,

      "type": "word",

      "position": 0

    },
    {

      "token": "시",

      "start_offset": 2,

      "end_offset": 3,

      "type": "word",

      "position": 1

    }

  ]
 }

But if we run with "공단시 " (with a trailing space)

GET _analyze

{

  "analyzer": "nori",

  "text": "공단시 "

}

It returns

{
   "tokens": [
    {

      "token": "공단",

      "start_offset": 0,

      "end_offset": 2,

      "type": "word",

      "position": 0

    },
    {

      *"token": "씨",*

      "start_offset": 2,

      "end_offset": 3,

      "type": "word",

      "position": 1

    }

  ]
 }

The second token should be "시" instead of  "씨".

  was:
We can reproduce it from Elasticsearch.

When we run the following command:

GET _analyze
{
  "analyzer": "nori",
  "text": "공단시"
}

It returns the following as expected:

{
  "tokens": [
    {
      "token": "공단",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 0
    },
    {
      "token": "시",
      "start_offset": 2,
      "end_offset": 3,
      "type": "word",
      "position": 1
    }
  ]
}

But if we run with "공단시 " (with a trailing space)

GET _analyze
{
  "analyzer": "nori",
  "text": "공단시 "
}

It returns

{
  "tokens": [
    {
      "token": "공단",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 0
    },
    {
      *"token": "씨",*
      "start_offset": 2,
      "end_offset": 3,
      "type": "word",
      "position": 1
    }
  ]
}

The second token should be "시" instead of "씨".


> nori analyzer issue with trailing space
> ---
>
> Key: LUCENE-8532
> URL: https://issues.apache.org/jira/browse/LUCENE-8532
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 7.4
> Environment: Elasticsearch version: Version: Version: 6.4.2, Build: 
> default/tar/04711c2/2018-09-26T13:34:09.098244Z, JVM: 1.8.0_131
> Plugins installed: [analysis-nori]
> JVM version:
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
> OS version: Darwin Kijuui-MacBook-Pro.local 17.7.0 Darwin Kernel Version 
> 17.7.0: Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64 
> x86_64
>Reporter: Kiju Kim
>Priority: Major
>
> We can reproduce it from Elasticsearch.
> When we run the following command:
> GET _analyze
> {
>   "analyzer": "nori",
>   "text": "공단시"
> }
> It returns the following as expected:
> {
>    "tokens": [
>     {
>       "token": "공단",
>       "start_offset": 0,
>       "end_offset": 2,
>       "type": "word",
>       "position": 0
>     },
>     {
>       "token": "시",
>       "start_offset": 2,
>       "end_offset": 3,
>       "type": "word",
>       "position": 1
>     }
>   ]
>  }
> But if we run with "공단시 " (with a trailing space)
> GET _analyze
> {
>   "analyzer": "nori",
>   "text": "공단시 "
> }
> It returns
> {
>    "tokens": [
>     {
>       "token": "공단",
>       "start_offset": 0,
>       "end_offset": 2,
>       "type": "word",
>       "position": 0
>     },
>     {
>       *"token": "씨",*
>       "start_offset": 2,
>       "end_offset": 3,
>       "type": "word",
>       "position": 1
>     }
>   ]
>  }
> The second token should be "시" instead of  "씨".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8532) nori analyzer issue with trailing space

2018-10-15 Thread Kiju Kim (JIRA)
Kiju Kim created LUCENE-8532:


 Summary: nori analyzer issue with trailing space
 Key: LUCENE-8532
 URL: https://issues.apache.org/jira/browse/LUCENE-8532
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 7.4
 Environment: Elasticsearch version: Version: Version: 6.4.2, Build: 
default/tar/04711c2/2018-09-26T13:34:09.098244Z, JVM: 1.8.0_131

Plugins installed: [analysis-nori]

JVM version:
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)


OS version: Darwin Kijuui-MacBook-Pro.local 17.7.0 Darwin Kernel Version 
17.7.0: Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64 x86_64
Reporter: Kiju Kim


We can reproduce it from Elasticsearch.

When we run the following command:

GET _analyze
{
  "analyzer": "nori",
  "text": "공단시"
}

It returns the following as expected:

{
  "tokens": [
    {
      "token": "공단",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 0
    },
    {
      "token": "시",
      "start_offset": 2,
      "end_offset": 3,
      "type": "word",
      "position": 1
    }
  ]
}

But if we run with "공단시 " (with a trailing space)

GET _analyze
{
  "analyzer": "nori",
  "text": "공단시 "
}

It returns

{
  "tokens": [
    {
      "token": "공단",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 0
    },
    {
      *"token": "씨",*
      "start_offset": 2,
      "end_offset": 3,
      "type": "word",
      "position": 1
    }
  ]
}

The second token should be "시" instead of "씨".
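
A minimal sketch for reproducing this directly against Lucene's nori module, 
without Elasticsearch (assuming the analysis/nori KoreanAnalyzer; the field name 
"f" is arbitrary):

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.ko.KoreanAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class NoriTrailingSpaceRepro {
  public static void main(String[] args) throws Exception {
    try (Analyzer analyzer = new KoreanAnalyzer()) {
      // Analyze the text with and without the trailing space and print the tokens.
      for (String text : new String[] {"공단시", "공단시 "}) {
        try (TokenStream ts = analyzer.tokenStream("f", text)) {
          CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
          ts.reset();
          StringBuilder tokens = new StringBuilder();
          while (ts.incrementToken()) {
            tokens.append('[').append(term).append("] ");
          }
          ts.end();
          System.out.println("\"" + text + "\" -> " + tokens);
        }
      }
    }
  }
}
{code}

If the bug is present, the second input should show the unexpected 씨 token instead of 시.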



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 886 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/886/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

11 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateWatcherIsRetainedOnPredicateFailure

Error Message:
Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$117/783933817@401ee6d2
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@7a2fb087[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]

Stack Trace:
java.util.concurrent.RejectedExecutionException: Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$117/783933817@401ee6d2
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@7a2fb087[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]
at 
__randomizedtesting.SeedInfo.seed([D27969BD6D3111F3:5A4FCBEEB59EF9E1]:0)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2104)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:848)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1397)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:194)
at 
java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.waitInBackground(TestCollectionStateWatchers.java:74)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateWatcherIsRetainedOnPredicateFailure(TestCollectionStateWatchers.java:241)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedt

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2920 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2920/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseParallelGC

16 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWatcherIsRemovedAfterTimeout

Error Message:
Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$121/1462461575@282cae7c
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@ffa4499[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]

Stack Trace:
java.util.concurrent.RejectedExecutionException: Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$121/1462461575@282cae7c
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@ffa4499[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]
at 
__randomizedtesting.SeedInfo.seed([5F4639FF4E1F7FAF:7ECC1CCF90EF5695]:0)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2104)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:848)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1397)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:194)
at 
java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.waitFor(TestCollectionStateWatchers.java:86)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWatcherIsRemovedAfterTimeout(TestCollectionStateWatchers.java:270)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Statement

[jira] [Commented] (SOLR-12870) Use StandardCharsets instead of String values

2018-10-15 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651015#comment-16651015
 ] 

Lucene/Solr QA commented on SOLR-12870:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  3m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  2m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  2m 51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 44s{color} 
| {color:red} core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 20s{color} 
| {color:red} solrj in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} test-framework in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.MultiThreadedOCPTest |
|   | solr.common.cloud.TestCollectionStateWatchers |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943988/SOLR-12870.master.1.patch
 |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 73a413c |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/202/artifact/out/patch-unit-solr_core.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/202/artifact/out/patch-unit-solr_solrj.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/202/testReport/ |
| modules | C: solr solr/core solr/solrj solr/test-framework U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/202/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Use StandardCharsets instead of String values
> -
>
> Key: SOLR-12870
> URL: https://issues.apache.org/jira/browse/SOLR-12870
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Peter Somogyi
>Priority: Minor
> Attachments: SOLR-12870.master.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Charsets are hardcoded in some places around the codebase which is 
> error-prone.
> Moving to StandardCharsets also has the benefit of dropping the try-catch 
> block caused by UnsupportedEncodingException.
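
A minimal sketch of the kind of change this implies, assuming a typical call site
(the class below is illustrative and not part of the attached patch): a charset
given as a String forces handling of the checked UnsupportedEncodingException,
while the StandardCharsets constant does not.

{code:java}
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

class CharsetUsageSketch {
  // Before: charset passed as a string literal, so the checked exception must be
  // caught even though UTF-8 is guaranteed to exist on every JVM.
  static byte[] before(String s) {
    try {
      return s.getBytes("UTF-8");
    } catch (UnsupportedEncodingException e) {
      throw new RuntimeException(e); // cannot happen for UTF-8
    }
  }

  // After: StandardCharsets.UTF_8 is a Charset constant, so no try-catch is needed.
  static byte[] after(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }
}
{code}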






[jira] [Commented] (SOLR-12357) TRA: Pre-emptively create next collection

2018-10-15 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651013#comment-16651013
 ] 

David Smiley commented on SOLR-12357:
-

Good catch – yes!  I should just commit this simple change.

> TRA: Pre-emptively create next collection 
> --
>
> Key: SOLR-12357
> URL: https://issues.apache.org/jira/browse/SOLR-12357
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12357.patch
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> When adding data to a Time Routed Alias (TRA), we sometimes need to create 
> new collections.  Today we only do this synchronously – on-demand when a 
> document is coming in.  But this can add delays as inbound documents are 
> held up for a collection to be created.  And, there may be a problem like a 
> lack of resources (e.g. ample SolrCloud nodes with space) that the policy 
> framework defines.  Such problems could be rectified sooner rather than later, 
> assuming there is log alerting in place (definitely out of scope here).
> Pre-emptive TRA collection needs a time window configuration parameter, 
> perhaps named something like "preemptiveCreateWindowMs".  If a document's 
> timestamp is within this time window _from the end time of the head/lead 
> collection_ then the collection can be created pre-emptively.  If no data is 
> being sent to the TRA, no collections will be auto created, nor will it 
> happen if older data is being added.  It may be convenient to effectively 
> limit this time setting to the _smaller_ of this value and the TRA interval 
> window, which I think is a fine limitation.
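
A rough sketch of the check described above, treating the tentative
"preemptiveCreateWindowMs" name as a plain value (illustrative only; not the
committed implementation):

{code:java}
class PreemptiveCreateSketch {
  // Returns true when a new collection may be created ahead of time: the document
  // still belongs to the head collection, but its timestamp falls inside the
  // configured window leading up to that collection's end time.
  static boolean shouldPreemptivelyCreate(long docTimestampMs,
                                          long headCollectionEndMs,
                                          long preemptiveCreateWindowMs) {
    return docTimestampMs < headCollectionEndMs
        && docTimestampMs >= headCollectionEndMs - preemptiveCreateWindowMs;
  }
}
{code}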






[JENKINS] Lucene-Solr-repro - Build # 1705 - Still Unstable

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1705/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1153/consoleText

[repro] Revision: d7fd82c0f8517251d67b0af021d259dffaa4dce6

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testSimpleCollectionWatch -Dtests.seed=BB38DABB11EA35DD 
-Dtests.multiplier=2 -Dtests.locale=fr-GF -Dtests.timezone=Europe/Vienna 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
73a413cd85ca03dae69250189b9c6ae24f42801c
[repro] git fetch
[repro] git checkout d7fd82c0f8517251d67b0af021d259dffaa4dce6

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   TestCollectionStateWatchers
[repro] ant compile-test

[...truncated 2560 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=BB38DABB11EA35DD -Dtests.multiplier=2 -Dtests.locale=fr-GF 
-Dtests.timezone=Europe/Vienna -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 307 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers

[repro] Re-testing 100% failures at the tip of master
[repro] git fetch
[repro] git checkout master

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   TestCollectionStateWatchers
[repro] ant compile-test

[...truncated 2452 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=BB38DABB11EA35DD -Dtests.multiplier=2 -Dtests.locale=fr-GF 
-Dtests.timezone=Europe/Vienna -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 1022 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   2/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers
[repro] git checkout 73a413cd85ca03dae69250189b9c6ae24f42801c

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-Tests-7.x - Build # 955 - Unstable

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/955/

1 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch

Error Message:
CollectionStateWatcher was never notified of cluster change

Stack Trace:
java.lang.AssertionError: CollectionStateWatcher was never notified of cluster 
change
at 
__randomizedtesting.SeedInfo.seed([F0E3527D24723158:ADD89D0D637FAE66]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16473 lines...]
   [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-solrj/test/J2/temp/s

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23035 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23035/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName

Error Message:
Could not find collection:second_collection

Stack Trace:
java.lang.AssertionError: Could not find collection:second_collection
at 
__randomizedtesting.SeedInfo.seed([6CE89C325CE00CC8:3B59D9899C1CF3D9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName(TestMiniSolrCloudClusterSSL.java:185)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.jav

[jira] [Commented] (SOLR-12874) Java 9+ GC Log files are being rotated every 20KB instead of every 20MB

2018-10-15 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650964#comment-16650964
 ] 

Tim Underwood commented on SOLR-12874:
--

[~elyograg] The options are already broken out by java version.  20M is 
currently used for Java 8.  Here is the current code:
   
{noformat}
# if verbose gc logging enabled, setup the location of the log file and rotation
if [ "$GC_LOG_OPTS" != "" ]; then
  if [[ "$JAVA_VER_NUM" -lt "9" ]] ; then
gc_log_flag="-Xloggc"
if [ "$JAVA_VENDOR" == "IBM J9" ]; then
  gc_log_flag="-Xverbosegclog"
fi
GC_LOG_OPTS+=("$gc_log_flag:$SOLR_LOGS_DIR/solr_gc.log" 
'-XX:+UseGCLogFileRotation' '-XX:NumberOfGCLogFiles=9' '-XX:GCLogFileSize=20M')
  else
# http://openjdk.java.net/jeps/158
for i in "${!GC_LOG_OPTS[@]}";
do
  # for simplicity, we only look at the prefix '-Xlog:gc'
  # (if 'all' or multiple tags are used starting with anything other then 
'gc' the user is on their own)
  # if a single additional ':' exists in param, then there is already an 
explicit output specifier
  GC_LOG_OPTS[$i]=$(echo ${GC_LOG_OPTS[$i]} | sed 
"s|^\(-Xlog:gc[^:]*$\)|\1:file=$SOLR_LOGS_DIR/solr_gc.log:time,uptime:filecount=9,filesize=2|")
done
  fi
fi
{noformat}
Java 8 (and anything less than 9) is using:
{noformat}
-XX:GCLogFileSize=20M
{noformat}
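
For comparison, after the proposed change the sed substitution above would emit a
filesize with an explicit unit, along the lines of the following (assuming the
default "-Xlog:gc*" option; the path comes from $SOLR_LOGS_DIR as in the script):

{noformat}
-Xlog:gc*:file=$SOLR_LOGS_DIR/solr_gc.log:time,uptime:filecount=9,filesize=20M
{noformat}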

> Java 9+ GC Log files are being rotated every 20KB instead of every 20MB
> ---
>
> Key: SOLR-12874
> URL: https://issues.apache.org/jira/browse/SOLR-12874
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Java 9+ GC logging options in bin/solr and bin/solr.cmd specify a log 
> rotation file size of 2, which according to JEP 158 
> ([https://openjdk.java.net/jeps/158]) should be the "file size in kb"; however, 
> when running Solr on Java 11 I'm seeing GC logs rotated every 20KB.
> Changing "filesize=2" to "filesize=20M" fixes the problem for me under 
> Linux.






[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-15 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650927#comment-16650927
 ] 

Lucene/Solr QA commented on SOLR-12243:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 31m 
48s{color} | {color:green} core in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 45s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.MultiThreadedOCPTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12243 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921951/SOLR-12243.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 73a413c |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/200/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/200/testReport/ |
| modules | C: lucene/core solr/core U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/200/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> request handler:
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3<-1 6<-3 9<30%
>  *:*
>  25
> 
>  
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  






[JENKINS] Lucene-Solr-repro - Build # 1704 - Still Unstable

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1704/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/187/consoleText

[repro] Revision: 69584f413021e253e63120172e7304f1d7ddacd7

[repro] Repro line:  ant test  -Dtestcase=LeaderFailoverAfterPartitionTest 
-Dtests.method=test -Dtests.seed=25C5DFA187B9D151 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-AU 
-Dtests.timezone=America/Atikokan -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestSimPolicyCloud 
-Dtests.method=testCreateCollectionAddReplica -Dtests.seed=25C5DFA187B9D151 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ja 
-Dtests.timezone=Asia/Jerusalem -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=CollectionsAPIAsyncDistributedZkTest 
-Dtests.method=testAsyncRequests -Dtests.seed=25C5DFA187B9D151 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hu 
-Dtests.timezone=Asia/Riyadh -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testSimpleCollectionWatch -Dtests.seed=20CD6D3A112FFBF9 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hu 
-Dtests.timezone=Pacific/Fakaofo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SolrCollectorTest 
-Dtests.seed=471B8C2321BCEE5F -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sr-CS -Dtests.timezone=America/Tortola 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
73a413cd85ca03dae69250189b9c6ae24f42801c
[repro] git fetch
[repro] git checkout 69584f413021e253e63120172e7304f1d7ddacd7

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/contrib/prometheus-exporter
[repro]   SolrCollectorTest
[repro]solr/core
[repro]   LeaderFailoverAfterPartitionTest
[repro]   CollectionsAPIAsyncDistributedZkTest
[repro]   TestSimPolicyCloud
[repro]solr/solrj
[repro]   TestCollectionStateWatchers
[repro] ant compile-test

[...truncated 2691 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.SolrCollectorTest" -Dtests.showOutput=onerror  
-Dtests.seed=471B8C2321BCEE5F -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sr-CS -Dtests.timezone=America/Tortola 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 73 lines...]
[repro] ant compile-test

[...truncated 1352 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.LeaderFailoverAfterPartitionTest|*.CollectionsAPIAsyncDistributedZkTest|*.TestSimPolicyCloud"
 -Dtests.showOutput=onerror  -Dtests.seed=25C5DFA187B9D151 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-AU 
-Dtests.timezone=America/Atikokan -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 157 lines...]
[repro] ant compile-test

[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror  
-Dtests.seed=20CD6D3A112FFBF9 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=hu -Dtests.timezone=Pacific/Fakaofo 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 299 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest
[repro]   0/5 failed: 
org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud
[repro]   0/5 failed: org.apache.solr.prometheus.collector.SolrCollectorTest
[repro]   5/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   TestCollectionStateWatchers
[repro] ant compile-test

[...truncated 2468 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror  
-Dtests.seed=20CD6D3A112FFBF9 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=hu -Dtests.timezone=Pacific/Fakaofo 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 1862 lines...]
[repro] Setting last failure code to 2

[jira] [Commented] (LUCENE-6327) ArrayIndexOutOfBoundsException in reading a lucene block

2018-10-15 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-6327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650924#comment-16650924
 ] 

Kevin Risden commented on LUCENE-6327:
--

[~vamsee] - Was this index on HDFS? Any chance you were able to find the root 
cause for this?

> ArrayIndexOutOfBoundsException in reading a lucene block
> 
>
> Key: LUCENE-6327
> URL: https://issues.apache.org/jira/browse/LUCENE-6327
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Vamsee Yarlagadda
>Priority: Minor
>
> I noticed this error while trying to do heavy indexing in Solr. This error was 
> seen in testing Cloudera Search (Solr 4.4 with lots of SolrCloud, other 
> critical bug fixes)
> {code}
> 2015-02-27 04:21:46,644 INFO org.apache.solr.core.SolrCore.Request: 
> [crunch_sequence_collection_shard2_replica5] webapp=/solr path=/update 
> params={distrib.from=http://search-15.vpc.cloudera.com:8983/solr/crunch_sequence_collection_shard2_replica2/&update.distrib=FROMLEADER&wt=javabin&version=2}
>  status=0 QTime=246 
> 2015-02-27 04:21:46,773 ERROR org.apache.solr.core.SolrCore: 
> java.lang.ArrayIndexOutOfBoundsException
>   at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum$Frame.fillTerm(BlockTreeTermsReader.java:2934)
>   at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum$Frame.scanToTermLeaf(BlockTreeTermsReader.java:2743)
>   at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum$Frame.scanToTerm(BlockTreeTermsReader.java:2662)
>   at 
> org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.seekExact(BlockTreeTermsReader.java:1695)
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(SolrIndexSearcher.java:746)
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(VersionInfo.java:193)
>   at org.apache.solr.update.UpdateLog.lookupVersion(UpdateLog.java:739)
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(VersionInfo.java:183)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:716)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:459)
>   at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:247)
>   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)
>   at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1953)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:766)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter$2.doFilter(SolrHadoopAuthenticationFilter.java:272)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:277)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:555)
>   at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter.doFilter(SolrHadoopAuthenticationFilter.java:277)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.solr.servlet.HostnameFilter.doFilter(HostnameFilter.java:86)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>   at 
> org.apache.catalin

[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-15 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650915#comment-16650915
 ] 

Lucene/Solr QA commented on SOLR-5004:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 34s{color} 
| {color:red} core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m  0s{color} 
| {color:red} solrj in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.DeleteReplicaTest |
|   | solr.common.cloud.TestCollectionStateWatchers |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-5004 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943970/SOLR-5004.02.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 73a413c |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/201/artifact/out/patch-unit-solr_core.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/201/artifact/out/patch-unit-solr_solrj.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/201/testReport/ |
| modules | C: solr/core solr/solrj U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/201/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, SOLR-5004.patch, 
> SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.
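
Purely for illustration, such a call could look like the following; the
"numSubShards" parameter name is hypothetical, as the issue has not settled on one:

{noformat}
/admin/collections?action=SPLITSHARD&collection=myCollection&shard=shard1&numSubShards=4
{noformat}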






[jira] [Commented] (SOLR-12874) Java 9+ GC Log files are being rotated every 20KB instead of every 20MB

2018-10-15 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650900#comment-16650900
 ] 

Shawn Heisey commented on SOLR-12874:
-

Is 20M properly interpreted by Java 8?  If not, we'll need different options 
for different java versions.

Sounds like there needs to be a bug filed against Java, because all of Oracle's 
documentation that I've been able to locate indicates that the number should be 
interpreted as kilobytes if a unit is not provided.

> Java 9+ GC Log files are being rotated every 20KB instead of every 20MB
> ---
>
> Key: SOLR-12874
> URL: https://issues.apache.org/jira/browse/SOLR-12874
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Java 9+ GC logging options in bin/solr and bin/solr.cmd specify a log 
> rotation file size of 2, which according to JEP 158 
> ([https://openjdk.java.net/jeps/158]) should be the "file size in kb"; however, 
> when running Solr on Java 11 I'm seeing GC logs rotated every 20KB.
> Changing "filesize=2" to "filesize=20M" fixes the problem for me under 
> Linux.






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-11) - Build # 838 - Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/838/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseSerialGC

22 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:
Could not find collection : delLiveColl

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : delLiveColl
at 
__randomizedtesting.SeedInfo.seed([502D77DFA0AA2692:FD4DC3D4BD958EE7]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:77)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:
Could not find collection : delLiveColl

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : de

[jira] [Commented] (SOLR-12799) Allow Authentication Plugins to easily intercept internode requests

2018-10-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650863#comment-16650863
 ] 

Jan Høydahl commented on SOLR-12799:


Anything that will simplify the flow is welcome. In my experience there was a 
need for the explicit principal copy.

I'm going to commit this to master on Thursday if there are no other comments.

> Allow Authentication Plugins to easily intercept internode requests
> ---
>
> Key: SOLR-12799
> URL: https://issues.apache.org/jira/browse/SOLR-12799
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Solr security framework currently allows a plugin to declare statically by 
> implementing the {{HttpClientBuilderPlugin}} interface whether it will handle 
> internode requests. If it implements the interface, the plugin MUST handle 
> ALL internode requests, even requests originating from Solr itself. Likewise, 
> if a plugin does not implement the interface, ALL requests will be 
> authenticated by the built-in {{PKIAuthenticationPlugin}}.
> In some cases (such as SOLR-12121) there is a need to forward end-user 
> credentials on internode requests, but let PKI handle it for solr-originated 
> requests. This is currently not possible without a dirty hack where each 
> plugin duplicates some PKI logic and calls PKI plugin from its own 
> interceptor even if it is disabled.
> This Jira makes this use case officially supported by the framework by:
>  * Letting {{PKIAuthenticationPlugin}} be always enabled. PKI will now in its 
> interceptor on a per-request basis first give the authc plugin a chance to 
> handle the request
>  * Adding a protected method to abstract class {{AuthenticationPlugin}}
>{code:java}
> protected boolean interceptInternodeRequest(HttpRequest httpRequest, 
> HttpContext httpContext)
> {code}
> that can be overridden by plugins in order to easily intercept requests 
> without registering its own interceptor. Returning 'false' delegates to PKI.
> Existing Authc plugins do *not* need to change as a result of this, and they 
> will work exactly as before, i.e. either handle ALL or NONE internode auth.
> New plugins choosing to *override* the new {{interceptInternodeRequest}} 
> method will obtain per-request control over who will secure each request. The 
> first user of this feature will be JWT token based auth in SOLR-12121.
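
A rough sketch of how a plugin might use the new hook, based only on the signature
quoted above (the class, header, and context attribute below are illustrative;
HttpRequest and HttpContext are the Apache HttpClient types):

{code:java}
import org.apache.http.HttpRequest;
import org.apache.http.protocol.HttpContext;

// In a real plugin this method would override
// AuthenticationPlugin.interceptInternodeRequest; shown standalone here for brevity.
public class ForwardingAuthSketch {
  protected boolean interceptInternodeRequest(HttpRequest httpRequest, HttpContext httpContext) {
    Object token = httpContext.getAttribute("user.token"); // hypothetical attribute name
    if (token != null) {
      // Forward the end-user credential on the internode request.
      httpRequest.setHeader("Authorization", "Bearer " + token);
      return true;   // this plugin handled the request
    }
    return false;    // delegate to PKIAuthenticationPlugin
  }
}
{code}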






[jira] [Commented] (SOLR-12791) Add Metrics reporting for AuthenticationPlugin

2018-10-15 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650858#comment-16650858
 ] 

Jan Høydahl commented on SOLR-12791:


Unless I get more feedback I'll commit this on Thursday.

> Add Metrics reporting for AuthenticationPlugin
> --
>
> Key: SOLR-12791
> URL: https://issues.apache.org/jira/browse/SOLR-12791
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Propose to add Metrics support for all Auth plugins. Will let abstract 
> {{AuthenticationPlugin}} base class implement {{SolrMetricProducer}} and keep 
> the counters, such as:
>  * requests
>  * req authenticated
>  * req pass-through (no credentials and blockUnknown false)
>  * req with auth failures due to wrong or malformed credentials
>  * req auth failures due to missing credentials
>  * errors
>  * timeouts
>  * timing stats
> Each implementation still needs to increment the counters etc.
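
A very rough sketch of the counters such a base class could keep, expressed with
the Dropwizard Metrics primitives that Solr's metrics framework builds on (the
metric names are illustrative and the SolrMetricProducer wiring is omitted):

{code:java}
import java.util.concurrent.TimeUnit;
import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

class AuthMetricsSketch {
  private final MetricRegistry registry = new MetricRegistry();
  final Counter requests = registry.counter("authentication.requests");
  final Counter authenticated = registry.counter("authentication.authenticated");
  final Counter passThrough = registry.counter("authentication.passThrough");
  final Counter failWrongCredentials = registry.counter("authentication.failWrongCredentials");
  final Counter failMissingCredentials = registry.counter("authentication.failMissingCredentials");
  final Counter errors = registry.counter("authentication.errors");
  final Timer requestTimes = registry.timer("authentication.requestTimes");

  // Example of how a concrete plugin might record one successfully authenticated request.
  void recordAuthenticated(long startNanos) {
    requests.inc();
    authenticated.inc();
    requestTimes.update(System.nanoTime() - startNanos, TimeUnit.NANOSECONDS);
  }
}
{code}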






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2919 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2919/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseSerialGC

53 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest: 1) 
Thread[id=11868, name=TRA-preemptive-creation-2407-thread-2, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest: 
   1) Thread[id=11868, name=TRA-preemptive-creation-2407-thread-2, 
state=WAITING, group=TGRP-TimeRoutedAliasUpdateProcessorTest]
at java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([AD662349B4C0F574]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=11868, name=TRA-preemptive-creation-2407-thread-2, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=11868, name=TRA-preemptive-creation-2407-thread-2, 
state=WAITING, group=TGRP-TimeRoutedAliasUpdateProcessorTest]
at java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([AD662349B4C0F574]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamDecoratorTest

Error Message:
20 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest: 1) 
Thread[id=936, name=Connection evictor, state=TIMED_WA

[JENKINS] Lucene-Solr-Tests-master - Build # 2879 - Unstable

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2879/

1 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch

Error Message:
CollectionStateWatcher was never notified of cluster change

Stack Trace:
java.lang.AssertionError: CollectionStateWatcher was never notified of cluster 
change
at 
__randomizedtesting.SeedInfo.seed([AE4DE51E3D9DA080:F3762A6E7A903FBE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 15981 lines...]
   [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers
   [junit4]   2> ERROR StatusLogger No Log4j 2 configuration file found. Using 
default configuration (logging only errors to 

[jira] [Comment Edited] (SOLR-12019) Prepare Streaming Expressions for machine learning functions

2018-10-15 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650730#comment-16650730
 ] 

Joel Bernstein edited comment on SOLR-12019 at 10/15/18 9:40 PM:
-

I'm very close to starting work on the Smile integration. I'll be taking the 
same approach as with Apache Commons Math which is to add new functions with 
every release. So it will be a gradual ramp up. The big question is what to do 
first? I was thinking of adding some of the regression algorithms that are not 
in Apache Commons Math, such as Lasso and Ridge. Then adding classifiers (SVM, 
NaiveBayes etc...). I'm also interested in the plotting.


was (Author: joel.bernstein):
I'm very close to starting work the Smile integration. I'll be taking the same 
approach as with Apache Commons Math which is to add new functions with every 
release. So it will be a gradual ramp up. The big question is what to do first? 
I was thinking of adding some of the regression algorithms that are not in 
Apache Commons Math, such as Lasso and Ridge. Then adding classifiers (SVM, 
NaiveBayes etc...). I'm also interested in the plotting.

> Prepare Streaming Expressions for machine learning functions
> 
>
> Key: SOLR-12019
> URL: https://issues.apache.org/jira/browse/SOLR-12019
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> This ticket is to prepare the Streaming Expressions framework for the next 
> phase of development which will focus on *machine learning*.
> Because this next phase will involve a large number of new functions it will 
> be important to prepare the Streaming Expressions framework before getting 
> started.
> There are three main goals of the ticket:
> 1) Refactoring of code and test cases to prepare for the new machine learning 
> functions.
> 2) Improve the documentation of the current statistical functions and 
> refactor the docs so they can support the new machine learning functions.
> 3) Integrate the [http://haifengl.github.io/smile/] libraries. Now that the 
> *Apache Commons Math* integration is close to completion its time to start on 
> the *Smile* machine learning integration.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-15 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650813#comment-16650813
 ] 

Jim Ferenczi commented on LUCENE-8531:
--

(Multi)PhraseQuery-s allow some reordering, but the semantics are different from 
an unordered span near query.
I don't think we can respect the slop correctly if we continue to use span 
queries here. We switched to span queries to avoid searching duplicate terms in 
multiple phrase queries, but I agree that the behavior is not consistent when 
using a slop. Maybe we could switch to the old method of building one phrase 
query per path when a slop is used? That way we could apply the slop to each 
phrase query independently. This is more costly than the span method, but it 
would be semantically correct.
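
For readers following the thread, here is a minimal sketch of the query shapes being compared; the field name and terms are made up, and this is not the QueryBuilder code itself:
{code:java}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class SlopSemanticsSketch {

  // (Multi)PhraseQuery: slop is an edit distance, so reordering the terms is
  // allowed as long as it fits within the slop budget.
  static Query phraseWithSlop(String field, int slop) {
    PhraseQuery.Builder builder = new PhraseQuery.Builder();
    builder.add(new Term(field, "allergic"));
    builder.add(new Term(field, "reaction"));
    builder.setSlop(slop);
    return builder.build();
  }

  // What analyzeGraphPhrase() currently produces: an ordered span near query,
  // so the clauses must appear in order no matter how large the slop is.
  static Query orderedSpanNear(String field, int slop) {
    SpanQuery[] clauses = new SpanQuery[] {
        new SpanTermQuery(new Term(field, "allergic")),
        new SpanTermQuery(new Term(field, "reaction"))
    };
    return new SpanNearQuery(clauses, slop, true);
  }

  // The unordered variant (inOrder=false) tolerates reordering again, but its
  // slop semantics still differ from a phrase query's edit distance.
  static Query unorderedSpanNear(String field, int slop) {
    SpanQuery[] clauses = new SpanQuery[] {
        new SpanTermQuery(new Term(field, "allergic")),
        new SpanTermQuery(new Term(field, "reaction"))
    };
    return new SpanNearQuery(clauses, slop, false);
  }
}
{code}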

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: LSH/MinHash

2018-10-15 Thread Tommaso Teofili
Hi Andy,

It would be very nice if you could do that and I'd be very interested
in reviewing and helping out with the patch.
I have been using that filter for a while with my own query bits; a
full fledged query parser would surely be a very useful contribution.

Regards,
Tommaso
On Mon, 15 Oct 2018 at 22:38, Andy Hind
 wrote:
>
> Hi All
>
> Following on from https://issues.apache.org/jira/browse/LUCENE-6968 (I know 
> it’s been a while…)
> I have a QParser plugin that can generate the appropriate banded queries for 
> Jaccard similarity.
>
> It covers the same functionality that was proposed in the original issue but 
> wrapped up as a query parser.
> There are two analysis cases and two query cases: hashes generated by 
> tokenisation or those generated by pre-analysis, and queries based on text 
> or on provided hash values.
>
> If there is interest, I will create the issue and put up the patch.
>
> Regards
>
> Andy
>
>
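
Not the proposed parser itself, but a rough sketch of the banded OR-of-ANDs query such a plugin would generate over MinHash values (field name and band layout are hypothetical), for anyone who wants to picture the Jaccard banding trick:

    import java.util.List;

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public class BandedMinHashQuerySketch {

      // A document matches when every hash in at least one band matches, which
      // approximates matching above a Jaccard similarity threshold (classic
      // LSH banding).
      static Query bandedQuery(String field, List<List<String>> bands) {
        BooleanQuery.Builder outer = new BooleanQuery.Builder();
        for (List<String> band : bands) {
          BooleanQuery.Builder inner = new BooleanQuery.Builder();
          for (String hash : band) {
            inner.add(new TermQuery(new Term(field, hash)), BooleanClause.Occur.MUST);
          }
          outer.add(inner.build(), BooleanClause.Occur.SHOULD);
        }
        return outer.build();
      }
    }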

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12875) ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets

2018-10-15 Thread Tim Underwood (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Underwood updated SOLR-12875:
-
Description: 
I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some 
requests when trying to make use of
{noformat}
uniqueBlock(_root_){noformat}
within JSON Facets.

Here are some example Stack Traces:
{noformat}
2018-10-12 14:08:50.587 ERROR (qtp215078753-3353) [   x:my_core] 
o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 13 
out of bounds for length 8
at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40)
at 
org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85)
at 
org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorByHashDV.java:50)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV$5.collect(FacetFieldProcessorByHashDV.java:395)
at 
org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:284)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:376)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:247)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
at 
org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
at 
org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
{noformat}
 

Here is another one at a different location in UniqueBlockAgg:
  
{noformat}
2018-10-12 21:37:57.322 ERROR (qtp215078753-4072) [   x:my_core] 
o.a.s.h.RequestHandlerBase java.lang.ArrayIndexOutOfBoundsException: Index 23 
out of bounds for length 16
at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.getValue(UniqueBlockAgg.java:59)
at org.apache.solr.search.facet.SlotAcc.setValues(SlotAcc.java:146)
at 
org.apache.solr.search.facet.FacetFieldProcessor.fillBucket(FacetFieldProcessor.java:431)
at 
org.apache.solr.search.facet.FacetFieldProcessor.findTopSlots(FacetFieldProcessor.java:381)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:249)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
at 
org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
at 
org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
{noformat}
 

 

 

 

  was:
I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some 
requests when trying to make use of {noformat}uniqueBlock(_root_){noformat} 
within JSON Facets.

Here are some example Stack Traces:
{noformat}
2018-10-12 14:08:50.587 ERROR (qtp215078753-3353) [   x:opticat] 
o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 13 
out of bounds for length 8
at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40)
at 
org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85)
at 
org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432)
at 
org.apache.solr.search.f

[jira] [Updated] (SOLR-12875) ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets

2018-10-15 Thread Tim Underwood (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Underwood updated SOLR-12875:
-
Description: 
I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some 
requests when trying to make use of {noformat}uniqueBlock(_root_){noformat} 
within JSON Facets.

Here are some example Stack Traces:
{noformat}
2018-10-12 14:08:50.587 ERROR (qtp215078753-3353) [   x:opticat] 
o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 13 
out of bounds for length 8
at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40)
at 
org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85)
at 
org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorByHashDV.java:50)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV$5.collect(FacetFieldProcessorByHashDV.java:395)
at 
org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:284)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:376)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:247)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
at 
org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
at 
org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
{noformat}
 

Here is another one at a different location in UniqueBlockAgg:
  
{noformat}
2018-10-12 21:37:57.322 ERROR (qtp215078753-4072) [   x:opticat] 
o.a.s.h.RequestHandlerBase java.lang.ArrayIndexOutOfBoundsException: Index 23 
out of bounds for length 16
at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.getValue(UniqueBlockAgg.java:59)
at org.apache.solr.search.facet.SlotAcc.setValues(SlotAcc.java:146)
at 
org.apache.solr.search.facet.FacetFieldProcessor.fillBucket(FacetFieldProcessor.java:431)
at 
org.apache.solr.search.facet.FacetFieldProcessor.findTopSlots(FacetFieldProcessor.java:381)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:249)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
at 
org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
at 
org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
{noformat}
 

 

 

 

  was:
I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some 
requests when trying to make use of uniqueBlock(_root_) within JSON Facets.

Here are some example Stack Traces:
{noformat}
2018-10-12 14:08:50.587 ERROR (qtp215078753-3353) [   x:opticat] 
o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 13 
out of bounds for length 8
at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40)
at 
org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85)
at 
org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432)
at 
org.apache.solr.search.facet.FacetFieldProces

[jira] [Updated] (SOLR-12875) ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets

2018-10-15 Thread Tim Underwood (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Underwood updated SOLR-12875:
-
Description: 
I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some 
requests when trying to make use of uniqueBlock(_root_) within JSON Facets.

Here are some example Stack Traces:
{noformat}
2018-10-12 14:08:50.587 ERROR (qtp215078753-3353) [   x:opticat] 
o.a.s.s.HttpSolrCall null:java.lang.ArrayIndexOutOfBoundsException: Index 13 
out of bounds for length 8
at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40)
at 
org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85)
at 
org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorByHashDV.java:50)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV$5.collect(FacetFieldProcessorByHashDV.java:395)
at 
org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:284)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:376)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:247)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
at 
org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
at 
org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
{noformat}
 

Here is another one at a different location in UniqueBlockAgg:
  
{noformat}
2018-10-12 21:37:57.322 ERROR (qtp215078753-4072) [   x:opticat] 
o.a.s.h.RequestHandlerBase java.lang.ArrayIndexOutOfBoundsException: Index 23 
out of bounds for length 16
at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.getValue(UniqueBlockAgg.java:59)
at org.apache.solr.search.facet.SlotAcc.setValues(SlotAcc.java:146)
at 
org.apache.solr.search.facet.FacetFieldProcessor.fillBucket(FacetFieldProcessor.java:431)
at 
org.apache.solr.search.facet.FacetFieldProcessor.findTopSlots(FacetFieldProcessor.java:381)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:249)
at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
at 
org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
at 
org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
at 
org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)
at 
org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
{noformat}
 

 

 

 

  was:
I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some 
requests when trying to make use of uniqueBlock(_root_) within JSON Facets.

Here are some example Stack Traces:

 

{{2018-10-11 17:28:41.887 ERROR (qtp215078753-1377) [ x:opticat] 
o.a.s.h.RequestHandlerBase java.lang.ArrayIndexOutOfBoundsException: Index 6 
out of bounds for length 4}}
{{ at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40)}}
{{ at 
org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorB

[jira] [Created] (SOLR-12875) ArrayIndexOutOfBoundsException when using uniqueBlock(_root_) in JSON Facets

2018-10-15 Thread Tim Underwood (JIRA)
Tim Underwood created SOLR-12875:


 Summary: ArrayIndexOutOfBoundsException when using 
uniqueBlock(_root_) in JSON Facets
 Key: SOLR-12875
 URL: https://issues.apache.org/jira/browse/SOLR-12875
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Affects Versions: 7.5
Reporter: Tim Underwood


I'm seeing java.lang.ArrayIndexOutOfBoundsException exceptions for some 
requests when trying to make use of uniqueBlock(_root_) within JSON Facets.

Here are some example Stack Traces:

 

{{2018-10-11 17:28:41.887 ERROR (qtp215078753-1377) [ x:opticat] 
o.a.s.h.RequestHandlerBase java.lang.ArrayIndexOutOfBoundsException: Index 6 
out of bounds for length 4}}
{{ at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.collectOrdToSlot(UniqueBlockAgg.java:40)}}
{{ at 
org.apache.solr.search.facet.UniqueSinglevaluedSlotAcc.collect(UniqueSinglevaluedSlotAcc.java:85)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessor.collectFirstPhase(FacetFieldProcessor.java:243)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectValFirstPhase(FacetFieldProcessorByHashDV.java:432)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.access$100(FacetFieldProcessorByHashDV.java:50)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV$5.collect(FacetFieldProcessorByHashDV.java:395)}}
{{ at 
org.apache.solr.search.DocSetUtil.collectSortedDocSet(DocSetUtil.java:284)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.collectDocs(FacetFieldProcessorByHashDV.java:376)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:247)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)}}
{{ at org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)}}
{{ at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)}}
{{ at 
org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)}}
{{ at 
org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)}}
{{ at org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)}}
{{ at org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)}}
{{ at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)}}

 

Here is another one at a different location in UniqueBlockAgg:

 

{{2018-10-12 21:37:57.322 ERROR (qtp215078753-4072) [ x:opticat] 
o.a.s.h.RequestHandlerBase java.lang.ArrayIndexOutOfBoundsException: Index 23 
out of bounds for length 16}}
{{ at 
org.apache.solr.search.facet.UniqueBlockAgg$UniqueBlockSlotAcc.getValue(UniqueBlockAgg.java:59)}}
{{ at org.apache.solr.search.facet.SlotAcc.setValues(SlotAcc.java:146)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessor.fillBucket(FacetFieldProcessor.java:431)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessor.findTopSlots(FacetFieldProcessor.java:381)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.calcFacets(FacetFieldProcessorByHashDV.java:249)}}
{{ at 
org.apache.solr.search.facet.FacetFieldProcessorByHashDV.process(FacetFieldProcessorByHashDV.java:214)}}
{{ at org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)}}
{{ at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)}}
{{ at 
org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)}}
{{ at 
org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)}}
{{ at org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:368)}}
{{ at org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)}}
{{ at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)}}
{{ at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)}}
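
For context, the kind of request that hits this is a uniqueBlock(_root_) aggregation nested under a terms facet in the JSON Facet API, along the lines of the sketch below (collection and field names are hypothetical):
{noformat}
curl http://localhost:8983/solr/my_core/query -d '
{
  "query": "type_s:child",
  "facet": {
    "categories": {
      "type": "terms",
      "field": "category_s",
      "facet": { "parentCount": "uniqueBlock(_root_)" }
    }
  }
}'
{noformat}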

 

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-15 Thread Elizabeth Haubert (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650777#comment-16650777
 ] 

Elizabeth Haubert commented on SOLR-12243:
--

After talking to Steve Rowe, there is a new issue, LUCENE-8531, to handle the 
current Span requirement that sloppy queries be inOrder.

Since the core issue with this ticket was that the span clauses prevent 
pf/pf2/pf3 from being generated at all, it seems prudent to back out the 
Lucene change under this ticket and open a new one to pick up the reordering 
once there is a patch for LUCENE-8531.

Updated patch pending shortly.

[~alessandro.benedetti], I will include the query expansion test you added, but 
update it to reflect that inOrder=false will become inOrder=true.  I'm not 
sure what the right way is to coordinate that with the pull request.

 

 

 

 

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> request handler:
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3<-1 6<-3 9<30%
>  *:*
>  25
> 
>  
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 182 - Still Unstable

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/182/

5 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZk2Test.test

Error Message:
Could not load collection from ZK: onenodecollection

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
onenodecollection
at 
__randomizedtesting.SeedInfo.seed([928B5117182AE5C6:1ADF6ECDB6D6883E]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1321)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:737)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:148)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:131)
at 
org.apache.solr.common.cloud.ClusterState.hasCollection(ClusterState.java:110)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForCollection(AbstractFullDistribZkTestBase.java:353)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.testNodeWithoutCollectionForwarding(BasicDistributedZk2Test.java:164)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.test(BasicDistributedZk2Test.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleA

[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-15 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650768#comment-16650768
 ] 

Steve Rowe commented on LUCENE-8531:


CC [~thetaphi]

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10981) Allow update to load gzip files

2018-10-15 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650765#comment-16650765
 ] 

David Smiley commented on SOLR-10981:
-

This is looking very good Andrew!  Thanks for your patience.  I applied the 
patch locally and ran tests and poked around. I only made slight edits and did 
some reformatting. I like that the test was revamped to use try-with-resources.

I'm inclined to alter the ref-guide change. I don't think it's worthy of a 
header. I edited the ref guide notes as follows:
{code:java}
The source of the data can be compressed using gzip, and Solr will generally 
detect this.
The detection is based on either the presence of a `Content-Encoding: gzip` 
HTTP header or the file ending with .gz or .gzip.
Gzip doesn't apply to `stream.body`.{code}
WDYT?  CHANGES.txt will credit you first, then Jan and me.
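
For anyone trying the patch, a hedged example of the two detection paths described in the snippet above (collection, paths, and file names are made up):
{noformat}
# gzip detected from the HTTP header
curl -X POST -H 'Content-Type: application/csv' -H 'Content-Encoding: gzip' \
     --data-binary @books.csv.gz \
     'http://localhost:8983/solr/mycollection/update?commit=true'

# gzip detected from the .gz extension when streaming a local file
# (stream.file requires remote streaming to be enabled)
curl 'http://localhost:8983/solr/mycollection/update?stream.file=/tmp/books.csv.gz&stream.contentType=application/csv&commit=true'
{noformat}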

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>Assignee: David Smiley
>Priority: Major
>  Labels: patch
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch, 
> SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch
>
>
> We currently import large CSV files. We store them in gzip files as they 
> compress at around 80%.
> To import them we must gunzip them and then import them. After that we no 
> longer need the decompressed files.
> This patch allows directly opening either URL, or local files that are 
> gzipped.
> For URLs, to determine if the file is gzipped, it will check the content 
> encoding=="gzip" or if the file ends in ".gz"
> For files, if the file ends in ".gz" then it will assume the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0, 7.0.1 and master from git.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-15 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-8531:
--

 Summary: QueryBuilder hard-codes inOrder=true for generated sloppy 
span near queries
 Key: LUCENE-8531
 URL: https://issues.apache.org/jira/browse/LUCENE-8531
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Reporter: Steve Rowe
Assignee: Steve Rowe


QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
phraseSlop, but hard-codes inOrder ctor param as true.

Before multi-term synonym support and graph token streams introduced the 
possibility of generating SpanNearQuery-s, QueryBuilder generated 
(Multi)PhraseQuery-s, which always interpret slop as allowing reordering edits. 
 Solr's eDismax query parser generates phrase queries when its pf/pf2/pf3 
params are specified, and when multi-term synonyms are used with a graph-aware 
synonym filter, SpanNearQuery-s are generated that require clauses to be in 
order; unlike with (Multi)PhraseQuery-s, reordering edits are not allowed, so 
this is a kind of regression.  See SOLR-12243 for edismax pf/pf2/pf3 context.  
(Note that the patch on SOLR-12243 also addresses another problem that blocks 
eDismax from generating queries *at all* under the above-described 
circumstances.)

I propose adding a new analyzeGraphPhrase() method that allows configuration of 
inOrder, which would allow eDismax to specify inOrder=false.  The existing 
analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



LSH/MinHash

2018-10-15 Thread Andy Hind
Hi All

Following on from https://issues.apache.org/jira/browse/LUCENE-6968 
 (I know it’s been a while…)
I have a QParser plugin that can generate the appropriate banded queries for 
Jaccard similarity.

It covers the same functionality that was proposed in the original issue but 
wrapped up as a query parser.
There are two analysis cases and two query cases: hashes generated by 
tokenisation or those generated by pre-analysis, and queries based on text or 
provided hash values.

If there is interest, I will create the issue and put up the patch.

Regards

Andy 




[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23034 - Still unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23034/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

67 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([E36E65A51D5C106]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([E36E65A51D5C106]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAd

[jira] [Commented] (SOLR-12019) Prepare Streaming Expressions for machine learning functions

2018-10-15 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650730#comment-16650730
 ] 

Joel Bernstein commented on SOLR-12019:
---

I'm very close to starting work on the Smile integration. I'll be taking the same 
approach as with Apache Commons Math which is to add new functions with every 
release. So it will be a gradual ramp up. The big question is what to do first? 
I was thinking of adding some of the regression algorithms that are not in 
Apache Commons Math, such as Lasso and Ridge. Then adding classifiers (SVM, 
NaiveBayes etc...). I'm also interested in the plotting.

> Prepare Streaming Expressions for machine learning functions
> 
>
> Key: SOLR-12019
> URL: https://issues.apache.org/jira/browse/SOLR-12019
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> This ticket is to prepare the Streaming Expressions framework for the next 
> phase of development which will focus on *machine learning*.
> Because this next phase will involve a large number of new functions it will 
> be important to prepare the Streaming Expressions framework before getting 
> started.
> There are three main goals of the ticket:
> 1) Refactoring of code and test cases to prepare for the new machine learning 
> functions.
> 2) Improve the documentation of the current statistical functions and 
> refactor the docs so they can support the new machine learning functions.
> 3) Integrate the [http://haifengl.github.io/smile/] libraries. Now that the 
> *Apache Commons Math* integration is close to completion its time to start on 
> the *Smile* machine learning integration.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12874) Java 9+ GC Log files are being rotated every 20KB instead of every 20MB

2018-10-15 Thread Tim Underwood (JIRA)
Tim Underwood created SOLR-12874:


 Summary: Java 9+ GC Log files are being rotated every 20KB instead 
of every 20MB
 Key: SOLR-12874
 URL: https://issues.apache.org/jira/browse/SOLR-12874
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.5
Reporter: Tim Underwood


The Java 9+ GC logging options in bin/solr and bin/solr.cmd specify a log 
rotation file size of 2, which according to JEP 158 
([https://openjdk.java.net/jeps/158]) should be the "file size in kb"; however, 
when running Solr on Java 11 I'm seeing GC logs rotated every 20KB.

Changing "filesize=2" to "filesize=20M" fixes the problem for me under 
Linux.
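
For reference, with JEP 158's unified logging syntax the corrected option looks roughly like this (log path and decorators are illustrative, not the exact bin/solr line):
{noformat}
-Xlog:gc*:file=/var/solr/logs/solr_gc.log:time,uptime:filecount=9,filesize=20M
{noformat}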



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #470: SOLR-12874 - Java 9+ GC Logging filesize para...

2018-10-15 Thread tpunder
GitHub user tpunder opened a pull request:

https://github.com/apache/lucene-solr/pull/470

SOLR-12874 - Java 9+ GC Logging filesize parameter should be 20M instead of 
2

JEP 158 (https://openjdk.java.net/jeps/158) says the filesize parameter is 
the “file size in kb” however that appears to not be the case since when it 
is set to a value of 2 you end up with GC logs that are only 2 bytes in 
length.  Setting the value to 20M produces the desired result of GC log files 
that are 20MB in size.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tpunder/lucene-solr 
java9plus_gc_logging_filesize

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/470.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #470


commit 5202a713baad4c7b743ad5ccd5db2bc079672d84
Author: Tim Underwood 
Date:   2018-10-15T19:52:12Z

Java 9+ GC Logging filesize parameter should be 20M instead of 2

JEP 158 (https://openjdk.java.net/jeps/158) says the filesize parameter is 
the “file size in kb” however that appears to not be the case since when it 
is set to a value of 2 you end up with GC logs that are only 2 bytes in 
length.  Setting the value to 20M produces the desired result of GC log files 
that are 20MB in size.




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10.0.1) - Build # 7570 - Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7570/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

11 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testPredicateFailureTimesOut

Error Message:
Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$128/818831685@40d6ed96
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@7eaaa7a9[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]

Stack Trace:
java.util.concurrent.RejectedExecutionException: Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$128/818831685@40d6ed96
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@7eaaa7a9[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]
at 
__randomizedtesting.SeedInfo.seed([613251841DEE82CC:9C0B37B844F657B6]:0)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2080)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:832)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:194)
at 
java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.waitFor(TestCollectionStateWatchers.java:86)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testPredicateFailureTimesOut(TestCollectionStateWatchers.java:220)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Statem

[jira] [Commented] (SOLR-12862) Add log10 Stream Evaluator and allow the pow Stream Evaluator to accept a vector of exponents

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650697#comment-16650697
 ] 

ASF subversion and git services commented on SOLR-12862:


Commit 7038e4c1b37af62c8c092addb2e90e1c932a3ce2 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7038e4c ]

SOLR-12862: Add log10 Stream Evaluator and allow the pow Stream Evaluator to 
accept a vector of exponents


> Add log10 Stream Evaluator and allow the pow Stream Evaluator to accept a 
> vector of exponents
> -
>
> Key: SOLR-12862
> URL: https://issues.apache.org/jira/browse/SOLR-12862
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12862.patch
>
>
> This ticket adds the *log10* Stream Evaluator to support base 10 log 
> transformations. It also adds support for passing a vector of exponents to 
> the *pow* Stream Evaluator to support reverse log transformations.
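
A rough sketch of how the two evaluators could be combined (values are purely 
illustrative; the expected results are shown alongside and are not part of the 
expressions themselves):

{code}
log10(array(10, 100, 1000))   => [1, 2, 3]
pow(10, array(1, 2, 3))       => [10, 100, 1000]
{code}

Here pow() receives a vector of exponents, which reverses the log10 
transformation.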



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12862) Add log10 Stream Evaluator and allow the pow Stream Evaluator to accept a vector of exponents

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650698#comment-16650698
 ] 

ASF subversion and git services commented on SOLR-12862:


Commit 653894fe2028107a090f44f9f46aa0bdf7d1424b in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=653894f ]

SOLR-12862: Fix TestLang


> Add log10 Stream Evaluator and allow the pow Stream Evaluator to accept a 
> vector of exponents
> -
>
> Key: SOLR-12862
> URL: https://issues.apache.org/jira/browse/SOLR-12862
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12862.patch
>
>
> This ticket adds the *log10* Stream Evaluator to support base 10 log 
> transformations. It also adds support for passing a vector of exponents to 
> the *pow* Stream Evaluator to support reverse log transformations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1153 - Still Failing

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1153/

No tests ran.

Build Log:
[...truncated 23269 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2431 links (1983 relative) to 3172 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

i

[jira] [Commented] (SOLR-12862) Add log10 Stream Evaluator and allow the pow Stream Evaluator to accept a vector of exponents

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650686#comment-16650686
 ] 

ASF subversion and git services commented on SOLR-12862:


Commit 73a413cd85ca03dae69250189b9c6ae24f42801c in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=73a413c ]

SOLR-12862: Fix TestLang


> Add log10 Stream Evaluator and allow the pow Stream Evaluator to accept a 
> vector of exponents
> -
>
> Key: SOLR-12862
> URL: https://issues.apache.org/jira/browse/SOLR-12862
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12862.patch
>
>
> This ticket adds the *log10* Stream Evaluator to support base 10 log 
> transformations. It also adds support for passing a vector of exponents to 
> the *pow* Stream Evaluator to support reverse log transformations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12862) Add log10 Stream Evaluator and allow the pow Stream Evaluator to accept a vector of exponents

2018-10-15 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650685#comment-16650685
 ] 

ASF subversion and git services commented on SOLR-12862:


Commit 6c0fbe5a9d544060c42c4a1ec241a71c47d14bb8 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6c0fbe5 ]

SOLR-12862: Add log10 Stream Evaluator and allow the pow Stream Evaluator to 
accept a vector of exponents


> Add log10 Stream Evaluator and allow the pow Stream Evaluator to accept a 
> vector of exponents
> -
>
> Key: SOLR-12862
> URL: https://issues.apache.org/jira/browse/SOLR-12862
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12862.patch
>
>
> This ticket adds the *log10* Stream Evaluator to support base 10 log 
> transformations. It also adds support for passing a vector of exponents to 
> the *pow* Stream Evaluator to support reverse log transformations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12872) Deprecate split.key parameter in SPLITSHARD API and make it easier to use

2018-10-15 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-12872.
-
Resolution: Invalid

False alarm. There is some disconnect between what the documentation says and 
what the code does. I'll fix the documentation as part of SOLR-5004 instead of 
here and leave the split.key parameter as is.

> Deprecate split.key parameter in SPLITSHARD API and make it easier to use
> -
>
> Key: SOLR-12872
> URL: https://issues.apache.org/jira/browse/SOLR-12872
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
>
> While working on SOLR-5004, I realized how confusing the current SPLITSHARD 
> API can get. Here's the current set of options to split a shard:
>  # Specify split.key but not with shard name. Providing the shard name here 
> leads to an exception
>  # Specify ranges with shard name (actually the same as above) but requires 
> the shard name
>  # Not specify ranges OR split.key. This will split the specified shard into 
> 2 from the middle of the hash range.
> split.key is just a syntactic sugar on top of the shard + ranges combination. 
> Ideally, we can even figure out shard name from the ranges, but for the sake 
> of consistency it perhaps makes sense to make shard name mandatory.
> I propose that we deprecate split.key and only allow 2 options:
>  # shard name + ranges
>  # shard name + (optional numSubShards as part of SOLR-5004). The number of 
> sub-shards defaults to 2.
> The intention here is to simplify the API by providing fewer but more 
> consistent and intuitive options.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12873) A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST

2018-10-15 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12873:
---
Attachment: SOLR-12873.patch

> A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST
> ---
>
> Key: SOLR-12873
> URL: https://issues.apache.org/jira/browse/SOLR-12873
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12873.patch
>
>
> There are a few config files still referring to {{LUCENE_CURRENT}} instead of 
> {{LATEST}}. This is to remove them, following on from LUCENE-5901 a while 
> back.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12873) A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST

2018-10-15 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-12873:
--

 Summary: A few solrconfig.xml still use LUCENE_CURRENT instead of 
LATEST
 Key: SOLR-12873
 URL: https://issues.apache.org/jira/browse/SOLR-12873
 Project: Solr
  Issue Type: Task
Reporter: Christine Poerschke
Assignee: Christine Poerschke


There are a few config files still referring to {{LUCENE_CURRENT}} instead of 
{{LATEST}}. This issue is to replace those remaining references, following on from 
LUCENE-5901 a while back.
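
For context, the change in each affected file should be a one-liner along these 
lines (shown against the {{luceneMatchVersion}} element; the surrounding config 
is omitted and the element placement is illustrative):

{code}
<!-- before -->
<luceneMatchVersion>LUCENE_CURRENT</luceneMatchVersion>
<!-- after -->
<luceneMatchVersion>LATEST</luceneMatchVersion>
{code}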



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12822) If a replicationFactor less than specified in collection attribute show suggestion to ADD-REPLICA

2018-10-15 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-12822.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.6

Hi Noble,

I'm resolving this issue.

> If a replicationFactor less than specified in collection attribute show 
> suggestion to ADD-REPLICA
> -
>
> Key: SOLR-12822
> URL: https://issues.apache.org/jira/browse/SOLR-12822
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12872) Deprecate split.key parameter in SPLITSHARD API and make it easier to use

2018-10-15 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-12872:
---

 Summary: Deprecate split.key parameter in SPLITSHARD API and make 
it easier to use
 Key: SOLR-12872
 URL: https://issues.apache.org/jira/browse/SOLR-12872
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Anshum Gupta
Assignee: Anshum Gupta


While working on SOLR-5004, I realized how confusing the current SPLITSHARD API 
can get. Here's the current set of options for splitting a shard:
 # Specify split.key but not the shard name. Providing the shard name here 
leads to an exception.
 # Specify ranges along with the shard name (actually the same as above, but 
the shard name is required).
 # Specify neither ranges nor split.key. This splits the specified shard into 
two at the middle of its hash range.

split.key is just syntactic sugar on top of the shard + ranges combination. 
Ideally we could even figure out the shard name from the ranges, but for the 
sake of consistency it perhaps makes sense to make the shard name mandatory.

I propose that we deprecate split.key and only allow two options (see the 
sketch below):
 # shard name + ranges
 # shard name + an optional numSubShards (as part of SOLR-5004). The number of 
sub-shards defaults to 2.

The intention here is to simplify the API by providing fewer but more 
consistent and intuitive options.
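
As a rough illustration (collection name, shard name, and range values are made 
up), the two remaining forms would look something like:

{code}
# shard name + explicit hash ranges
/admin/collections?action=SPLITSHARD&collection=coll1&shard=shard1&ranges=0-1f4,1f5-3e8

# shard name only; numSubShards (from SOLR-5004) defaulting to 2
/admin/collections?action=SPLITSHARD&collection=coll1&shard=shard1&numSubShards=3
{code}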



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12871) sort=childfield(currency_field) desc fails with exception about REWRITABLE field type

2018-10-15 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-12871:
---

 Summary: sort=childfield(currency_field) desc fails with exception 
about REWRITABLE field type
 Key: SOLR-12871
 URL: https://issues.apache.org/jira/browse/SOLR-12871
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Affects Versions: 6.6
Reporter: Mikhail Khludnev


When searching by bjq (block join query) and sorting by a matching child 
currency field, like {{sort=childfield(currency_field) desc}}, it fails with 
{code}
UnsupportedOperationException: Sort type REWRITEABLE is not supported
at 
org.apache.lucene.search.join.ToParentBlockJoinSortField.<init>(ToParentBlockJoinSortField.java:65)
{code}
At least it would be good to start documenting a workaround. Btw, why not allow 
functions over children just by rewriting the underlying sort field?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12699) make LTRScoringModel immutable (to allow hashCode caching)

2018-10-15 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650618#comment-16650618
 ] 

Christine Poerschke commented on SOLR-12699:


Attached revised patch which incorporates the two things mentioned above (null 
remaining null and use of LinkedHashMap instead of HashMap). Also marked 
LTRScoringModel.calculateHashCode() final in response to 
TestWrapperModel.testMethodOverridesAndDelegation failing otherwise.
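
For anyone skimming the patch, a generic sketch of the pattern in plain Java 
follows; the class and field names are illustrative and not the actual 
LTRScoringModel code:

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Objects;

// Immutable holder: every field is final and hashCode is computed once in the
// constructor, so repeated hashCode() calls become a cheap field read.
public final class ScoringModelSketch {
  private final String name;
  private final List<String> features;
  private final int cachedHashCode;

  public ScoringModelSketch(String name, List<String> features) {
    this.name = name;
    // Defensive, unmodifiable copy keeps the instance immutable even if the
    // caller later mutates the list it passed in.
    this.features = Collections.unmodifiableList(new ArrayList<>(features));
    this.cachedHashCode = Objects.hash(this.name, this.features);
  }

  @Override
  public int hashCode() {
    return cachedHashCode;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof ScoringModelSketch)) return false;
    ScoringModelSketch other = (ScoringModelSketch) o;
    return Objects.equals(name, other.name)
        && Objects.equals(features, other.features);
  }
}
{code}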

> make LTRScoringModel immutable (to allow hashCode caching)
> --
>
> Key: SOLR-12699
> URL: https://issues.apache.org/jira/browse/SOLR-12699
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-12699.patch, SOLR-12699.patch, SOLR-12699.patch, 
> SOLR-12699.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... LTRScoringModel was a mutable object. It was leading to the 
> calculation of hashcode on each query, which in turn can consume a lot of 
> time ... So I decided to make LTRScoringModel immutable and cache hashCode 
> calculation. ...
> (Please see SOLR-12688 description for overall context and analysis results.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12699) make LTRScoringModel immutable (to allow hashCode caching)

2018-10-15 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12699:
---
Attachment: SOLR-12699.patch

> make LTRScoringModel immutable (to allow hashCode caching)
> --
>
> Key: SOLR-12699
> URL: https://issues.apache.org/jira/browse/SOLR-12699
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Priority: Major
> Attachments: SOLR-12699.patch, SOLR-12699.patch, SOLR-12699.patch, 
> SOLR-12699.patch
>
>
> [~slivotov] wrote in SOLR-12688:
> bq. ... LTRScoringModel was a mutable object. It was leading to the 
> calculation of hashcode on each query, which in turn can consume a lot of 
> time ... So I decided to make LTRScoringModel immutable and cache hashCode 
> calculation. ...
> (Please see SOLR-12688 description for overall context and analysis results.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12870) Use StandardCharsets instead of String values

2018-10-15 Thread Peter Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated SOLR-12870:
-
Attachment: SOLR-12870.master.1.patch

> Use StandardCharsets instead of String values
> -
>
> Key: SOLR-12870
> URL: https://issues.apache.org/jira/browse/SOLR-12870
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Peter Somogyi
>Priority: Minor
> Attachments: SOLR-12870.master.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Charsets are hardcoded in some places around the codebase which is 
> error-prone.
> Moving to StandardCharsets also has the benefit of dropping the try-catch 
> block caused by UnsupportedEncodingException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #469: SOLR-12870: Use StandardCharsets instead of S...

2018-10-15 Thread petersomogyi
GitHub user petersomogyi opened a pull request:

https://github.com/apache/lucene-solr/pull/469

SOLR-12870: Use StandardCharsets instead of String values



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/petersomogyi/lucene-solr SOLR-12870

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/469.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #469


commit 6401befa4322438b183a8b314800ebad7d6a7556
Author: Peter Somogyi 
Date:   2018-10-15T18:42:27Z

SOLR-12870: Use StandardCharsets instead of String values




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12870) Use StandardCharsets instead of String values

2018-10-15 Thread Peter Somogyi (JIRA)
Peter Somogyi created SOLR-12870:


 Summary: Use StandardCharsets instead of String values
 Key: SOLR-12870
 URL: https://issues.apache.org/jira/browse/SOLR-12870
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Peter Somogyi


Charsets are hardcoded in some places around the codebase, which is error-prone.

Moving to StandardCharsets also has the benefit of dropping the try-catch block 
caused by UnsupportedEncodingException.
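
A minimal sketch of the kind of change involved, using generic JDK calls rather 
than specific Solr code:

{code}
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

public class CharsetExample {
  static byte[] beforeStyle(String text) {
    // Charset named by a String: the compiler forces handling of
    // UnsupportedEncodingException even though "UTF-8" always exists.
    try {
      return text.getBytes("UTF-8");
    } catch (UnsupportedEncodingException e) {
      throw new RuntimeException(e);
    }
  }

  static byte[] afterStyle(String text) {
    // Charset constant: no checked exception, no typo-prone string literal.
    return text.getBytes(StandardCharsets.UTF_8);
  }
}
{code}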



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12869) Unit test stalling

2018-10-15 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650581#comment-16650581
 ] 

Shawn Heisey commented on SOLR-12869:
-

The big problem with Guava and Solr isn't Solr itself -- it's Hadoop.  Solr 
includes hadoop dependencies so that it can store indexes in HDFS.  See 
SOLR-11763.

> Unit test stalling
> --
>
> Key: SOLR-12869
> URL: https://issues.apache.org/jira/browse/SOLR-12869
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.4
>Reporter: Vishal
>Priority: Minor
>  Labels: test
> Attachments: solr-release.diff
>
>
> When guava dependency is upgraded from 14.0.1 to the latest version 
> (26.0-jre/25.0-jre), some unit test stall indefinitely and testing never 
> finishes up.
> For example, here HdfsNNFailoverTest stall indefinitely. Log excerpts for 
> unit test run with guava 25.0-jre:
> 13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
> HdfsNNFailoverTest (suite)
> 13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
> HdfsNNFailoverTest (suite)
> 13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
> HdfsNNFailoverTest (suite)
> 13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
> HdfsNNFailoverTest (suite)
> 13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
> HdfsNNFailoverTest (suite)
> 13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
> HdfsNNFailoverTest (suite)
> Note: guava upgrade from default version 14.0.1 to 25.0-jre or 26.0-jre 
> requires solr code changes. The diff file (sole-release.diff) is attached 
> with this bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2918 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2918/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

11 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testCanWaitForNonexistantCollection

Error Message:
Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$129/1879245192@6f7da80f
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@b420afb[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]

Stack Trace:
java.util.concurrent.RejectedExecutionException: Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$129/1879245192@6f7da80f
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@b420afb[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]
at 
__randomizedtesting.SeedInfo.seed([322F9941C160A454:990F02F7AC3E607F]:0)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2080)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:832)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:194)
at 
java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.waitInBackground(TestCollectionStateWatchers.java:74)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testCanWaitForNonexistantCollection(TestCollectionStateWatchers.java:203)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.eval

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 862 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/862/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testDeletionsTriggerWatches

Error Message:
Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/2118000478@72a14bb6
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@247fb36f[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8]

Stack Trace:
java.util.concurrent.RejectedExecutionException: Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$9/2118000478@72a14bb6
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@247fb36f[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8]
at 
__randomizedtesting.SeedInfo.seed([42BAAE1B2A4422C3:E07B6694A827F4AE]:0)
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:194)
at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.waitInBackground(TestCollectionStateWatchers.java:74)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testDeletionsTriggerWatches(TestCollectionStateWatchers.java:280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementA

[jira] [Commented] (SOLR-11763) Upgrade Guava to 23.0

2018-10-15 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650534#comment-16650534
 ] 

Shawn Heisey commented on SOLR-11763:
-

bq. Indeed and because of this we can't get rid of the dependency and move 
Solr's usage of Guava to Java8 

Even though we can't remove the dependency because it's required by other 
dependencies, I don't see any reason not to switch from Guava methods to native 
Java methods in code that we control.
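
A couple of hedged examples of what such a switch typically looks like; these 
mappings are illustrative and not an inventory of actual Solr call sites:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class GuavaToJdkSketch {
  static List<String> buildAndJoin(String value, List<String> parts) {
    // Guava calls being replaced (shown as comments, not compiled here):
    //   Lists.newArrayList()                     -> new ArrayList<>()
    //   Preconditions.checkNotNull(value, "msg") -> Objects.requireNonNull(value, "msg")
    //   Joiner.on(",").join(parts)               -> String.join(",", parts)
    Objects.requireNonNull(value, "value must not be null");
    List<String> names = new ArrayList<>(parts);
    names.add(String.join(",", parts));
    return names;
  }
}
{code}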


> Upgrade Guava to 23.0
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch
>
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Urgent Help required to configure SOLR with HTTPS (TLS1.2)

2018-10-15 Thread Amit Kumar266
Hi Team,

I have one issue where I'm looking for help. I am able to configure Solr with 
HTTPS (TLS 1.2) on my local Windows machine, but when I try to set up the same 
on my Unix server I only get HTTPS with TLS 1.0. The only difference is that on 
the Unix box I need to get a root CA added so that it is recognized by browsers.

Could you please help me understand whether any specific settings need to be 
made in the Solr-related files to enable TLS 1.2? A quick response would be 
highly appreciated.

Thanks and regards,
Amit



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12869) Unit test stalling

2018-10-15 Thread Vishal (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal updated SOLR-12869:
--
Description: 
When the guava dependency is upgraded from 14.0.1 to the latest version 
(26.0-jre/25.0-jre), some unit tests stall indefinitely and testing never 
finishes.

For example, HdfsNNFailoverTest stalls indefinitely here. Log excerpts from a 
unit test run with guava 25.0-jre:

13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
HdfsNNFailoverTest (suite)

13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
HdfsNNFailoverTest (suite)

13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
HdfsNNFailoverTest (suite)

13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
HdfsNNFailoverTest (suite)

13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
HdfsNNFailoverTest (suite)

13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
HdfsNNFailoverTest (suite)



Note: upgrading guava from the default version 14.0.1 to 25.0-jre or 26.0-jre 
requires Solr code changes. The diff file (solr-release.diff) is attached to 
this bug.

  was:
When guava dependency is upgraded from 14.0.1 to the latest version 
(26.0-jre/25.0-jre), some unit test stall indefinitely and testing never 
finishes up.

For example, here HdfsNNFailoverTest stall indefinitely. Log excerpts for unit 
test run with guava 25.0-jre:

13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
HdfsNNFailoverTest (suite)

13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
HdfsNNFailoverTest (suite)

13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
HdfsNNFailoverTest (suite)

13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
HdfsNNFailoverTest (suite)

13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
HdfsNNFailoverTest (suite)

13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
HdfsNNFailoverTest (suite)



Note: to upgrade guava from default version 14.0.1 to 25.0-jre or 26.0-jre 
requires solr code changes. The diff file is attached with this bug.


> Unit test stalling
> --
>
> Key: SOLR-12869
> URL: https://issues.apache.org/jira/browse/SOLR-12869
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.4
>Reporter: Vishal
>Priority: Minor
>  Labels: test
> Attachments: solr-release.diff
>
>
> When guava dependency is upgraded from 14.0.1 to the latest version 
> (26.0-jre/25.0-jre), some unit test stall indefinitely and testing never 
> finishes up.
> For example, here HdfsNNFailoverTest stall indefinitely. Log excerpts for 
> unit test run with guava 25.0-jre:
> 13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
> HdfsNNFailoverTest (suite)
> 13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
> HdfsNNFailoverTest (suite)
> 13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
> HdfsNNFailoverTest (suite)
> 13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
> HdfsNNFailoverTest (suite)
> 13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
> HdfsNNFailoverTest (suite)
> 13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
> HdfsNNFailoverTest (suite)
> Note: guava upgrade from default version 14.0.1 to 25.0-jre or 26.0-jre 
> requires solr code changes. The diff file (sole-release.diff) is attached 
> with this bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional com

[jira] [Updated] (SOLR-12869) Unit test stalling

2018-10-15 Thread Vishal (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal updated SOLR-12869:
--
Attachment: solr-release.diff

> Unit test stalling
> --
>
> Key: SOLR-12869
> URL: https://issues.apache.org/jira/browse/SOLR-12869
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.4
>Reporter: Vishal
>Priority: Minor
>  Labels: test
> Attachments: solr-release.diff
>
>
> When guava dependency is upgraded from 14.0.1 to the latest version 
> (26.0-jre/25.0-jre), some unit test stall indefinitely and testing never 
> finishes up.
> For example, here HdfsNNFailoverTest stall indefinitely. Log excerpts for 
> unit test run with guava 25.0-jre:
> 13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
> HdfsNNFailoverTest (suite)
> 13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
> HdfsNNFailoverTest (suite)
> 13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
> HdfsNNFailoverTest (suite)
> 13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
> HdfsNNFailoverTest (suite)
> 13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
> HdfsNNFailoverTest (suite)
> 13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
> HdfsNNFailoverTest (suite)
> Note: to upgrade guava from default version 14.0.1 to 25.0-jre or 26.0-jre 
> requires solr code changes. The diff file is attached with this bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12869) Unit test stalling

2018-10-15 Thread Vishal (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal updated SOLR-12869:
--
Description: 
When guava dependency is upgraded from 14.0.1 to the latest version 
(26.0-jre/25.0-jre), some unit test stall indefinitely and testing never 
finishes up.

For example, here HdfsNNFailoverTest stall indefinitely. Log excerpts for unit 
test run with guava 25.0-jre:

13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
HdfsNNFailoverTest (suite)

13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
HdfsNNFailoverTest (suite)

13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
HdfsNNFailoverTest (suite)

13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
HdfsNNFailoverTest (suite)

13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
HdfsNNFailoverTest (suite)

13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
HdfsNNFailoverTest (suite)



Note: to upgrade guava from default version 14.0.1 to 25.0-jre or 26.0-jre 
requires solr code changes. The diff file is attached with this bug.

  was:
When guava dependency is upgraded from 14.0.1 to the latest version 
(26.0-jre/25.0-jre), some unit test stall indefinitely and testing never 
finishes up.

For example, here HdfsNNFailoverTest stall indefinitely. Log excerpts for unit 
test run with guava 25.0-jre:

13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
HdfsNNFailoverTest (suite)

13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
HdfsNNFailoverTest (suite)

13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
HdfsNNFailoverTest (suite)

13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
HdfsNNFailoverTest (suite)

13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
HdfsNNFailoverTest (suite)

13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
HdfsNNFailoverTest (suite)


> Unit test stalling
> --
>
> Key: SOLR-12869
> URL: https://issues.apache.org/jira/browse/SOLR-12869
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.4
>Reporter: Vishal
>Priority: Minor
>  Labels: test
>
> When guava dependency is upgraded from 14.0.1 to the latest version 
> (26.0-jre/25.0-jre), some unit test stall indefinitely and testing never 
> finishes up.
> For example, here HdfsNNFailoverTest stall indefinitely. Log excerpts for 
> unit test run with guava 25.0-jre:
> 13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
> HdfsNNFailoverTest (suite)
> 13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
> HdfsNNFailoverTest (suite)
> 13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
> HdfsNNFailoverTest (suite)
> 13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
> HdfsNNFailoverTest (suite)
> 13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
> HdfsNNFailoverTest (suite)
> 13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
> HdfsNNFailoverTest (suite)
> Note: to upgrade guava from default version 14.0.1 to 25.0-jre or 26.0-jre 
> requires solr code changes. The diff file is attached with this bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1703 - Unstable

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1703/

[...truncated 32 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/953/consoleText

[repro] Revision: 05ec77ed0933982e853511ed633583dd4ec2d71c

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=raceConditionOnDeleteAndRegisterReplica 
-Dtests.seed=4762962B3CEA2388 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=sr -Dtests.timezone=Atlantic/Jan_Mayen -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=deleteReplicaByCountForAllShards -Dtests.seed=4762962B3CEA2388 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
-Dtests.timezone=Atlantic/Jan_Mayen -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=deleteLiveReplicaTest -Dtests.seed=4762962B3CEA2388 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
-Dtests.timezone=Atlantic/Jan_Mayen -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testSimpleCollectionWatch -Dtests.seed=1AFE25120AD1622 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=es-AR 
-Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
d7fd82c0f8517251d67b0af021d259dffaa4dce6
[repro] git fetch
[repro] git checkout 05ec77ed0933982e853511ed633583dd4ec2d71c

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   DeleteReplicaTest
[repro]solr/solrj
[repro]   TestCollectionStateWatchers
[repro] ant compile-test

[...truncated 3437 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.DeleteReplicaTest" -Dtests.showOutput=onerror  
-Dtests.seed=4762962B3CEA2388 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=sr -Dtests.timezone=Atlantic/Jan_Mayen -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 21795 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror  
-Dtests.seed=1AFE25120AD1622 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=es-AR -Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 281 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.cloud.DeleteReplicaTest
[repro]   3/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers
[repro] git checkout d7fd82c0f8517251d67b0af021d259dffaa4dce6

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23033 - Failure!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23033/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth

Error Message:
Could not find collection:second_collection

Stack Trace:
java.lang.AssertionError: Could not find collection:second_collection
at 
__randomizedtesting.SeedInfo.seed([496A08EB5629BB13:B5D0DCDFAE090AD9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:263)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:249)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:157)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth(TestMiniSolrCloudClusterSSL.java:121)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgn

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 187 - Still Unstable

2018-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/187/

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.prometheus.collector.SolrCollectorTest

Error Message:
Error from server at http://127.0.0.1:44623/solr: Error getting replica 
locations : unable to get autoscaling policy session

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:44623/solr: Error getting replica locations : 
unable to get autoscaling policy session
at __randomizedtesting.SeedInfo.seed([471B8C2321BCEE5F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.prometheus.exporter.SolrExporterTestBase.setupCluster(SolrExporterTestBase.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.prometheus.collector.SolrCollectorTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!! [InternalHttpClient, 
SolrZkClient, SolrZkClient, InternalHttpClient, InternalHttpClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:321)
  at 
org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:215)
  at 
org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:47)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:539)  at 
org.apache.solr.

[jira] [Updated] (SOLR-12869) Unit test stalling

2018-10-15 Thread Vishal (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal updated SOLR-12869:
--
Description: 
When the guava dependency is upgraded from 14.0.1 to the latest version 
(26.0-jre/25.0-jre), some unit tests stall indefinitely and testing never 
finishes.

For example, here HdfsNNFailoverTest stalls indefinitely. Log excerpts from a unit 
test run with guava 25.0-jre:

13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
HdfsNNFailoverTest (suite)

13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
HdfsNNFailoverTest (suite)

13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
HdfsNNFailoverTest (suite)

13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
HdfsNNFailoverTest (suite)

13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
HdfsNNFailoverTest (suite)

13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
HdfsNNFailoverTest (suite)

  was:
When the guava dependency is upgraded from 14.0.1 to the latest version 
(26.0-jre/25.0-jre), some unit tests stall indefinitely and testing never 
finishes.

Attaching logs from a unit test run with guava 25.0-jre.


> Unit test stalling
> --
>
> Key: SOLR-12869
> URL: https://issues.apache.org/jira/browse/SOLR-12869
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.4
>Reporter: Vishal
>Priority: Minor
>  Labels: test
>
> When the guava dependency is upgraded from 14.0.1 to the latest version 
> (26.0-jre/25.0-jre), some unit tests stall indefinitely and testing never 
> finishes.
> For example, here HdfsNNFailoverTest stalls indefinitely. Log excerpts from a 
> unit test run with guava 25.0-jre:
> 13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
> HdfsNNFailoverTest (suite)
> 13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
> HdfsNNFailoverTest (suite)
> 13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
> HdfsNNFailoverTest (suite)
> 13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
> HdfsNNFailoverTest (suite)
> 13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
> HdfsNNFailoverTest (suite)
> 13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
> HdfsNNFailoverTest (suite)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12869) Unit test stalling

2018-10-15 Thread Vishal (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal updated SOLR-12869:
--
Attachment: (was: solr_test_guava25.txt)

> Unit test stalling
> --
>
> Key: SOLR-12869
> URL: https://issues.apache.org/jira/browse/SOLR-12869
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.4
>Reporter: Vishal
>Priority: Minor
>  Labels: test
>
> When the guava dependency is upgraded from 14.0.1 to the latest version 
> (26.0-jre/25.0-jre), some unit tests stall indefinitely and testing never 
> finishes.
> Attaching logs from a unit test run with guava 25.0-jre.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12869) Unit test stalling

2018-10-15 Thread Vishal (JIRA)
Vishal created SOLR-12869:
-

 Summary: Unit test stalling
 Key: SOLR-12869
 URL: https://issues.apache.org/jira/browse/SOLR-12869
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Affects Versions: 7.4
Reporter: Vishal
 Attachments: solr_test_guava25.txt

When the guava dependency is upgraded from 14.0.1 to the latest version 
(26.0-jre/25.0-jre), some unit tests stall indefinitely and testing never 
finishes.

Attaching logs from a unit test run with guava 25.0-jre.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-15 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5004:
---
Attachment: (was: SOLR-5004.03.patch)

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, SOLR-5004.patch, 
> SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-15 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5004:
---
Attachment: SOLR-5004.02.patch

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, SOLR-5004.patch, 
> SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-15 Thread Anshum Gupta (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650439#comment-16650439
 ] 

Anshum Gupta commented on SOLR-5004:


[~varunthacker] - numShards is what I think of at the collection level. I also 
considered 'numSplits', but the confusion there is that numSplits=1 would lead to 
2 sub-shards, i.e. numSplits + 1 sub-shards, which would confuse users.

The ref guide change will certainly follow once this is committed.
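
To make the discussion concrete, here is a minimal SolrJ sketch of what an n-way split request could look like against a local node. The parameter name numSubShards is only a placeholder for whatever name this issue settles on, and the collection and shard names are made up.

{code}
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class SplitShardExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "SPLITSHARD");
      params.set("collection", "test2");
      params.set("shard", "shard1");
      // Hypothetical parameter name; the final name is still under discussion here.
      params.set("numSubShards", "4");
      params.set("async", "split-tag-1");

      NamedList<Object> response = client.request(
          new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params));
      System.out.println(response);
    }
  }
}
{code}

The request is sent to the Collections API exactly like today's two-way SPLITSHARD; only the extra parameter changes.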

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2109 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2109/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:38229_solr, 
127.0.0.1:41775_solr, 127.0.0.1:50677_solr] Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/11)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_true_shard1_replica_n1", 
  "base_url":"http://127.0.0.1:54545/solr";,   
"node_name":"127.0.0.1:54545_solr",   "state":"down",   
"type":"NRT",   "leader":"true"}, "core_node6":{   
"core":"raceDeleteReplica_true_shard1_replica_n5",   
"base_url":"http://127.0.0.1:54545/solr";,   
"node_name":"127.0.0.1:54545_solr",   "state":"down",   
"type":"NRT"}, "core_node4":{   
"core":"raceDeleteReplica_true_shard1_replica_n2",   
"base_url":"http://127.0.0.1:50677/solr";,   
"node_name":"127.0.0.1:50677_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:38229_solr, 127.0.0.1:41775_solr, 127.0.0.1:50677_solr]
Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/11)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_true_shard1_replica_n1",
  "base_url":"http://127.0.0.1:54545/solr";,
  "node_name":"127.0.0.1:54545_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_true_shard1_replica_n5",
  "base_url":"http://127.0.0.1:54545/solr";,
  "node_name":"127.0.0.1:54545_solr",
  "state":"down",
  "type":"NRT"},
"core_node4":{
  "core":"raceDeleteReplica_true_shard1_replica_n2",
  "base_url":"http://127.0.0.1:50677/solr";,
  "node_name":"127.0.0.1:50677_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([3B0AF85CC66ED97E:511C998CAE9C93B4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrots

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2917 - Still Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2917/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseParallelGC

10 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWatcherIsRemovedAfterTimeout

Error Message:
Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$10/32437646@845a2e
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@85fa28[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8]

Stack Trace:
java.util.concurrent.RejectedExecutionException: Task 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$10/32437646@845a2e
 rejected from 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@85fa28[Terminated,
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8]
at 
__randomizedtesting.SeedInfo.seed([4E579089693FB36C:6FDDB5B9B7CF9A56]:0)
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:194)
at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.waitFor(TestCollectionStateWatchers.java:86)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWatcherIsRemovedAfterTimeout(TestCollectionStateWatchers.java:270)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at

[jira] [Commented] (SOLR-11970) Deprecate maxShardsPerNode while creating collections

2018-10-15 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650312#comment-16650312
 ] 

Varun Thacker commented on SOLR-11970:
--

[~shalinmangar] after the recent changes going into 7.6 (SOLR-12739 and linked 
JIRAs), can this issue perhaps be closed as invalid?

> Deprecate maxShardsPerNode while creating collections
> -
>
> Key: SOLR-11970
> URL: https://issues.apache.org/jira/browse/SOLR-11970
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> Today maxShardsPerNode helps users ensure multiple replicas of the same shard 
> don't get assigned to the same node.
> Starting with 7.0, the policy framework can express the same constraint.
> Both can conflict today.
> If a user creates a collection with maxShardsPerNode=1, here's the equivalent 
> rule in the policy framework.
> {code}
> {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
> {code}
> http://lucene.apache.org/solr/guide/solrcloud-autoscaling-policy-preferences.html#limit-replica-placement
> We should also change the default of maxShardsPerNode from 1 to -1 so that it 
> doesn't fail commands when a user doesn't specify this parameter.
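
The rule quoted above can be installed as a cluster-wide policy through the autoscaling API. A minimal sketch, assuming a single local node and SolrJ's V2Request posting to /cluster/autoscaling; the base URL is only an example:

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.V2Request;

public class SetClusterPolicyExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // Same rule as the maxShardsPerNode=1 equivalent quoted above:
      // at most one replica of each shard on any node.
      String payload = "{\"set-cluster-policy\": ["
          + "{\"replica\": \"<2\", \"shard\": \"#EACH\", \"node\": \"#ANY\"}"
          + "]}";
      new V2Request.Builder("/cluster/autoscaling")
          .withMethod(SolrRequest.METHOD.POST)
          .withPayload(payload)
          .build()
          .process(client);
    }
  }
}
{code}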



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12867) Async request status: Not getting all status messages because a response from one node will overwrite previous responses from that node

2018-10-15 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-12867.
--
Resolution: Duplicate

> Async request status: Not getting all status messages because a response from 
> one node will overwrite previous responses from that node
> ---
>
> Key: SOLR-12867
> URL: https://issues.apache.org/jira/browse/SOLR-12867
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>Priority: Major
>
> Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.
> Not all of the responses from different nodes in the collection are being 
> reported.  According to [~shalinmangar], this is because multiple responses 
> from a node are overwriting earlier responses from that node.
> Steps to reproduce:
>  * Start a cloud example with "bin/solr -e cloud" in a 7.5.0 binary download. 
>  Tell it that you want 3 nodes, accept defaults for all other questions.
> * Create a collection with 30 shards:
> ** bin\solr create -c test2 -shards 30 -replicationFactor 2
> * Start an async backup of the collection.  On a Windows system, the URL 
> might look like this:
> ** 
> http://localhost:8983/solr/admin/collections?action=BACKUP&name=test2&collection=test2&location=C%3A%5CUsers%5Celyograg%5CDownloads%5Csolrbackups&async=sometag
>  * After a few seconds (to give the backup time to complete), request the 
> status of the async operation:
>  ** 
> http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=sometag
> The response will only contain individual statuses for 3 of the 30 shards.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12291) OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on each node

2018-10-15 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650306#comment-16650306
 ] 

Varun Thacker commented on SOLR-12291:
--

The APIs affected by this bug are: BACKUP, RELOAD collection, and RESTORE 
collection.

> OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on 
> each node
> --
>
> Key: SOLR-12291
> URL: https://issues.apache.org/jira/browse/SOLR-12291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-122911.patch
>
>
> The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists 
> on one node.
> When multiple replicas of a slice are on the same node, we only track one 
> replica's async request. This happens because the async requestMap's key is 
> "node_name".
> I discovered this when [~alabax] shared some logs of a restore issue, where 
> the second replica got added before the first replica had completed its 
> restorecore action.
> While looking at the logs I noticed that the overseer never called 
> REQUESTSTATUS for the restorecore action, almost as if it had missed 
> tracking that particular async request.
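
To illustrate the failure mode, a toy sketch (not the actual OverseerCollectionMessageHandler code): when the tracking map is keyed by node name alone, the second replica's async request silently replaces the first, whereas a key that also includes the core keeps both entries.

{code}
import java.util.HashMap;
import java.util.Map;

public class AsyncTrackingSketch {
  public static void main(String[] args) {
    // Two replicas of the same shard hosted on the same node.
    String node = "127.0.0.1:8983_solr";

    // Keyed by node name only: the second put overwrites the first,
    // so one async request is never tracked (the bug described above).
    Map<String, String> byNode = new HashMap<>();
    byNode.put(node, "restorecore-request-for-replica_n1");
    byNode.put(node, "restorecore-request-for-replica_n2");
    System.out.println("tracked by node name: " + byNode.size()); // prints 1

    // Keyed by something unique per replica/request: both stay tracked.
    Map<String, String> byCore = new HashMap<>();
    byCore.put(node + "/replica_n1", "restorecore-request-for-replica_n1");
    byCore.put(node + "/replica_n2", "restorecore-request-for-replica_n2");
    System.out.println("tracked by node+core:  " + byCore.size()); // prints 2
  }
}
{code}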



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12867) Async request status: Not getting all status messages because a response from one node will overwrite previous responses from that node

2018-10-15 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650300#comment-16650300
 ] 

Shawn Heisey commented on SOLR-12867:
-

[~varunthacker], sounds like the correct thing to do is close this as a 
duplicate and add a note to SOLR-12291 saying that BACKUP is also affected.

> Async request status: Not getting all status messages because a response from 
> one node will overwrite previous responses from that node
> ---
>
> Key: SOLR-12867
> URL: https://issues.apache.org/jira/browse/SOLR-12867
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>Priority: Major
>
> Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.
> Not all of the responses from different nodes in the collection are being 
> reported.  According to [~shalinmangar], this is because multiple responses 
> from a node are overwriting earlier responses from that node.
> Steps to reproduce:
>  * Start a cloud example with "bin/solr -e cloud" in a 7.5.0 binary download. 
>  Tell it that you want 3 nodes, accept defaults for all other questions.
> * Create a collection with 30 shards:
> ** bin\solr create -c test2 -shards 30 -replicationFactor 2
> * Start an async backup of the collection.  On a Windows system, the URL 
> might look like this:
> ** 
> http://localhost:8983/solr/admin/collections?action=BACKUP&name=test2&collection=test2&location=C%3A%5CUsers%5Celyograg%5CDownloads%5Csolrbackups&async=sometag
>  * After a few seconds (to give the backup time to complete), request the 
> status of the async operation:
>  ** 
> http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=sometag
> The response will only contain individual statuses for 3 of the 30 shards.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12867) Async request status: Not getting all status messages because a response from one node will overwrite previous responses from that node

2018-10-15 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650291#comment-16650291
 ] 

Varun Thacker commented on SOLR-12867:
--

Hi Shawn,

Unfortunately this has been a known bug for a while :/ I had created SOLR-12291 
to track the work but never got around to committing it.

> Async request status: Not getting all status messages because a response from 
> one node will overwrite previous responses from that node
> ---
>
> Key: SOLR-12867
> URL: https://issues.apache.org/jira/browse/SOLR-12867
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>Priority: Major
>
> Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.
> Not all of the responses from different nodes in the collection are being 
> reported.  According to [~shalinmangar], this is because multiple responses 
> from a node are overwriting earlier responses from that node.
> Steps to reproduce:
>  * Start a cloud example with "bin/solr -e cloud" in a 7.5.0 binary download. 
>  Tell it that you want 3 nodes, accept defaults for all other questions.
> * Create a collection with 30 shards:
> ** bin\solr create -c test2 -shards 30 -replicationFactor 2
> * Start an async backup of the collection.  On a Windows system, the URL 
> might look like this:
> ** 
> http://localhost:8983/solr/admin/collections?action=BACKUP&name=test2&collection=test2&location=C%3A%5CUsers%5Celyograg%5CDownloads%5Csolrbackups&async=sometag
>  * After a few seconds (to give the backup time to complete), request the 
> status of the async operation:
>  ** 
> http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=sometag
> The response will only contain individual statuses for 3 of the 30 shards.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 885 - Unstable!

2018-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/885/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

15 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:53663_solr, 
127.0.0.1:53664_solr, 127.0.0.1:53665_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
   "base_url":"http://127.0.0.1:53666/solr";,   
"node_name":"127.0.0.1:53666_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"http://127.0.0.1:53666/solr";,   
"node_name":"127.0.0.1:53666_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:53663_solr, 127.0.0.1:53664_solr, 127.0.0.1:53665_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_false_shard1_replica_n1",
  "base_url":"http://127.0.0.1:53666/solr";,
  "node_name":"127.0.0.1:53666_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"http://127.0.0.1:53666/solr";,
  "node_name":"127.0.0.1:53666_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([FB7AEEDD7ED292EB:916C8F0D1620D821]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.rando

[jira] [Updated] (SOLR-12867) Async request status: Not getting all status messages because a response from one node will overwrite previous responses from that node

2018-10-15 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12867:

Description: 
Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.

Not all of the responses from different nodes in the collection are being 
reported.  According to [~shalinmangar], this is because multiple responses 
from a node are overwriting earlier responses from that node.

Steps to reproduce:

 * Start a cloud example with "bin/solr -e cloud" in a 7.5.0 binary download.  
Tell it that you want 3 nodes, accept defaults for all other questions.
* Create a collection with 30 shards:
** bin\solr create -c test2 -shards 30 -replicationFactor 2
* Start an async backup of the collection.  On a Windows system, the URL might 
look like this:
** 
http://localhost:8983/solr/admin/collections?action=BACKUP&name=test2&collection=test2&location=C%3A%5CUsers%5Celyograg%5CDownloads%5Csolrbackups&async=sometag
 * After a few seconds (to give the backup time to complete), request the 
status of the async operation:
 ** 
http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=sometag

The response will only contain individual statuses for 3 of the 30 shards.


  was:
Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.

Not all of the responses from different nodes in the collection are being 
reported.  According to [~shalinmangar], this is because multiple responses 
from a node are overwriting earlier responses from that node.



> Async request status: Not getting all status messages because a response from 
> one node will overwrite previous responses from that node
> ---
>
> Key: SOLR-12867
> URL: https://issues.apache.org/jira/browse/SOLR-12867
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>Priority: Major
>
> Problem noticed with REQUESTSTATUS on an async collections API BACKUP call.
> Not all of the responses from different nodes in the collection are being 
> reported.  According to [~shalinmangar], this is because multiple responses 
> from a node are overwriting earlier responses from that node.
> Steps to reproduce:
>  * Start a cloud example with "bin/solr -e cloud" in a 7.5.0 binary download. 
>  Tell it that you want 3 nodes, accept defaults for all other questions.
> * Create a collection with 30 shards:
> ** bin\solr create -c test2 -shards 30 -replicationFactor 2
> * Start an async backup of the collection.  On a Windows system, the URL 
> might look like this:
> ** 
> http://localhost:8983/solr/admin/collections?action=BACKUP&name=test2&collection=test2&location=C%3A%5CUsers%5Celyograg%5CDownloads%5Csolrbackups&async=sometag
>  * After a few seconds (to give the backup time to complete), request the 
> status of the async operation:
>  ** 
> http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=sometag
> The response will only contain individual statuses for 3 of the 30 shards.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10465) setIdField should be deprecated in favor of SolrClientBuilder methods

2018-10-15 Thread Charles Sanders (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16650221#comment-16650221
 ] 

Charles Sanders commented on SOLR-10465:


The patch adds a withIdField method to the builder and deprecates the setIdField method.
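
A minimal before/after sketch of the intended usage, assuming the patch adds withIdField to the shared SolrClientBuilder so it becomes available on concrete builders such as CloudSolrClient.Builder; the ZooKeeper address and field name are only examples:

{code}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class WithIdFieldExample {
  public static void main(String[] args) throws Exception {
    // Before: configure the routing id field by mutating the built client.
    CloudSolrClient before = new CloudSolrClient.Builder(
            Collections.singletonList("localhost:9983"), Optional.empty())
        .build();
    before.setIdField("doc_id"); // setter deprecated by this patch
    before.close();

    // After (with the attached patch applied): configure it on the builder,
    // so the built client no longer needs to be mutated.
    CloudSolrClient after = new CloudSolrClient.Builder(
            Collections.singletonList("localhost:9983"), Optional.empty())
        .withIdField("doc_id") // hypothetical method added by the attached patch
        .build();
    after.close();
  }
}
{code}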

> setIdField should be deprecated in favor of SolrClientBuilder methods
> -
>
> Key: SOLR-10465
> URL: https://issues.apache.org/jira/browse/SOLR-10465
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 7.0
>
> Attachments: SOLR-10465.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the {{setIdField}} setter 
> on all {{SolrClient}} implementations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10465) setIdField should be deprecated in favor of SolrClientBuilder methods

2018-10-15 Thread Charles Sanders (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Sanders updated SOLR-10465:
---
Attachment: SOLR-10465.patch

> setIdField should be deprecated in favor of SolrClientBuilder methods
> -
>
> Key: SOLR-10465
> URL: https://issues.apache.org/jira/browse/SOLR-10465
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 7.0
>
> Attachments: SOLR-10465.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the {{setIdField}} setter 
> on all {{SolrClient}} implementations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


