[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 1047 - Unstable!

2019-02-02 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/1047/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testDistribFaceting

Error Message:
Field intField should have a count of 1 expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Field intField should have a count of 1 expected:<1> but was:<0>
at __randomizedtesting.SeedInfo.seed([9BA870059FC5BB48:F9198E38FBDE909A]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.apache.solr.cloud.DocValuesNotIndexedTest.testFacet(DocValuesNotIndexedTest.java:450)
at org.apache.solr.cloud.DocValuesNotIndexedTest.testDistribFaceting(DocValuesNotIndexedTest.java:213)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 1946 lines...]
   [junit4] JVM J0: stderr was

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 442 - Failure

2019-02-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/442/

No tests ran.

Build Log:
[...truncated 23461 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2460 links (2011 relative) to 3224 anchors in 246 files
 [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.8.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings ::

[jira] [Updated] (SOLR-12330) Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) either reported too little and even might be ignored

2019-02-02 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12330:

Attachment: SOLR-12330.patch

> Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) 
> either reported too little and even might be ignored 
> ---
>
> Key: SOLR-12330
> URL: https://issues.apache.org/jira/browse/SOLR-12330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.3
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch, 
> SOLR-12330.patch
>
>
> Just encountered such weird behaviour; will recheck and follow up. 
>  {{"filter":["\\{!v=$bogus}"]}} responds back with just an NPE, which makes it 
> impossible to guess the reason.
> -It might be even worse, since- {{"filter":[\\{"param":"bogus"}]}} seems to be 
> just silently ignored. Turns out it's OK, see SOLR-9682.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13131) Category Routed Aliases

2019-02-02 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759287#comment-16759287
 ] 

David Smiley commented on SOLR-13131:
-

Okay.

> Category Routed Aliases
> ---
>
> Key: SOLR-13131
> URL: https://issues.apache.org/jira/browse/SOLR-13131
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: indexingWithCRA.png, indexingwithoutCRA.png, 
> indexintWithoutCRA2.png
>
>
> This ticket is to add a second type of routed alias in addition to the 
> current time routed aliases. The new type of alias will allow data driven 
> creation of collections based on the values of a field and automated 
> organization of these collections under an alias that allows the collections 
> to also be searched as a whole.
> The use case in mind at present is an IOT device type segregation, but I 
> could also see this leading to the ability to direct updates to tenant 
> specific hardware (in cooperation with autoscaling). 
> This ticket also looks forward to (but does not include) the creation of a 
> Dimensionally Routed Alias which would allow organizing time routed data also 
> segregated by device
> Further design details to be added in comments.
>  
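
Purely as an illustration of the idea described above, here is a tiny Java sketch of routing documents to per-category collections under a shared alias. The class name, the naming scheme, and the lazy-creation logic are assumptions made for the sketch; they are not the design under discussion in this ticket.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative only: route each document to a per-category collection under one alias. */
public class CategoryRoutingSketch {
  private final String aliasName;
  // Category value -> collection name; a real implementation would consult cluster state.
  private final Map<String, String> knownCollections = new ConcurrentHashMap<>();

  public CategoryRoutingSketch(String aliasName) {
    this.aliasName = aliasName;
  }

  /** Resolve (and lazily "create") the collection an update goes to, keyed by the routed field's value. */
  public String collectionFor(String categoryValue) {
    return knownCollections.computeIfAbsent(categoryValue,
        v -> aliasName + "__" + v); // hypothetical naming scheme
  }

  public static void main(String[] args) {
    CategoryRoutingSketch router = new CategoryRoutingSketch("devices");
    System.out.println(router.collectionFor("thermostat")); // devices__thermostat
    System.out.println(router.collectionFor("doorbell"));   // devices__doorbell
  }
}
{code}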



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12330) Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) either reported too little and even might be ignored

2019-02-02 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759286#comment-16759286
 ] 

Munendra S N commented on SOLR-12330:
-

 [^SOLR-12330.patch] 
[~mkhludnev]
I have removed the error condition for the 3rd case.
Also handled cases which can lead to 
* ClassCastException
* NullPointerException

 [^SOLR-12330-combined.patch] 
This one combines both SOLR-12330 and SOLR-13174

Let me know if there are any other NPE & CCE cases which need to be handled in 
FacetModule.
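
For readers following the thread, a minimal sketch of the general idea, turning a bare NPE/CCE into a descriptive 400 error. This is illustrative only, not the attached patch; the class name, method name, and messages are assumptions.

{code}
import java.util.List;

import org.apache.solr.common.SolrException;
import org.apache.solr.common.SolrException.ErrorCode;

public class DomainFilterCheckSketch {
  /**
   * Validate a value pulled out of a JSON Facet "domain" map before it is cast,
   * so a missing or mistyped "filter" entry yields a clear 400 error instead of
   * an unexplained NullPointerException / ClassCastException.
   */
  static Object requireFilter(Object rawFilter) {
    if (rawFilter == null) {
      throw new SolrException(ErrorCode.BAD_REQUEST,
          "'filter' domain option resolved to null; check any referenced parameters");
    }
    if (!(rawFilter instanceof String) && !(rawFilter instanceof List)) {
      throw new SolrException(ErrorCode.BAD_REQUEST,
          "'filter' domain option must be a query string or a list of query strings, got: "
              + rawFilter.getClass().getSimpleName());
    }
    return rawFilter;
  }

  public static void main(String[] args) {
    try {
      requireFilter(null);
    } catch (SolrException e) {
      // Prints the HTTP-style code and the human-readable reason.
      System.out.println(e.code() + ": " + e.getMessage());
    }
  }
}
{code}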

> Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) 
> either reported too little and even might be ignored 
> ---
>
> Key: SOLR-12330
> URL: https://issues.apache.org/jira/browse/SOLR-12330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.3
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12330-combined.patch, SOLR-12330.patch, 
> SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch
>
>
> Just encountered such weird behaviour; will recheck and follow up. 
>  {{"filter":["\\{!v=$bogus}"]}} responds back with just an NPE, which makes it 
> impossible to guess the reason.
> -It might be even worse, since- {{"filter":[\\{"param":"bogus"}]}} seems to be 
> just silently ignored. Turns out it's OK, see SOLR-9682.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12330) Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) either reported too little and even might be ignored

2019-02-02 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12330:

Attachment: SOLR-12330-combined.patch

> Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) 
> either reported too little and even might be ignored 
> ---
>
> Key: SOLR-12330
> URL: https://issues.apache.org/jira/browse/SOLR-12330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.3
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12330-combined.patch, SOLR-12330.patch, 
> SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch
>
>
> Just encountered such weird behaviour; will recheck and follow up. 
>  {{"filter":["\\{!v=$bogus}"]}} responds back with just an NPE, which makes it 
> impossible to guess the reason.
> -It might be even worse, since- {{"filter":[\\{"param":"bogus"}]}} seems to be 
> just silently ignored. Turns out it's OK, see SOLR-9682.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8598) improve error handling in JSON Facet API

2019-02-02 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759285#comment-16759285
 ] 

Munendra S N commented on SOLR-8598:


[~ysee...@gmail.com]
I think this has already been merged. Can we close this?

> improve error handling in JSON Facet API
> 
>
> Key: SOLR-8598
> URL: https://issues.apache.org/jira/browse/SOLR-8598
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Attachments: SOLR-8598.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11763:

Fix Version/s: master (9.0)
   8.x

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava 
to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558#discussion_r253282735
 
 

 ##
 File path: lucene/ivy-versions.properties
 ##
 @@ -116,7 +117,7 @@ org.apache.calcite.version = 1.13.0
 /org.apache.commons/commons-math3 = 3.6.1
 /org.apache.commons/commons-text = 1.4
 
-org.apache.curator.version = 2.8.0
+org.apache.curator.version = 2.13.0
 
 Review comment:
  This version shades Guava, so we don't end up with Guava issues down the line, 
and it matches the Hadoop 3.2.0 Curator version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava 
to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558#discussion_r253282727
 
 

 ##
 File path: lucene/ivy-versions.properties
 ##
 @@ -24,14 +24,15 @@ com.fasterxml.jackson.core.version = 2.9.6
 /com.github.ben-manes.caffeine/caffeine = 2.4.0
 /com.github.virtuald/curvesapi = 1.04
 
-/com.google.guava/guava = 14.0.1
+/com.google.guava/guava = 25.1-jre
 /com.google.protobuf/protobuf-java = 3.6.1
 /com.google.re2j/re2j = 1.2
 /com.googlecode.juniversalchardet/juniversalchardet = 1.0.3
 /com.googlecode.mp4parser/isoparser = 1.1.22
 /com.healthmarketscience.jackcess/jackcess = 2.1.12
 /com.healthmarketscience.jackcess/jackcess-encrypt = 2.1.4
 /com.ibm.icu/icu4j = 62.1
+/com.jayway.jsonpath/json-path = 2.4.0
 
 Review comment:
  Needed by Calcite.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava 
to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558#discussion_r253282730
 
 

 ##
 File path: lucene/ivy-versions.properties
 ##
 @@ -222,7 +223,7 @@ org.carrot2.morfologik.version = 2.1.5
 
 /org.ccil.cowan.tagsoup/tagsoup = 1.2.1
 
-org.codehaus.janino.version = 2.7.6
+org.codehaus.janino.version = 3.0.9
 
 Review comment:
  Needed by Calcite.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11763:

Attachment: SOLR-11763.patch

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759241#comment-16759241
 ] 

Kevin Risden commented on SOLR-11763:
-

The latest patch upgrades Curator and adds the license/notice for Jayway 
json-path, which is required by Calcite.

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch, SOLR-11763.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759239#comment-16759239
 ] 

Kevin Risden commented on SOLR-11763:
-

Curator uses Guava, and in the version we are using it is not shaded. Curator 
ended up shading Guava to shield its users from dealing with Guava issues. We 
need to upgrade Curator to take advantage of this. I will upgrade Curator to 
2.13.0, which is what Hadoop 3.2 is using as well.

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-8.x #15: POMs out of sync

2019-02-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-8.x/15/

No tests ran.

Build Log:
[...truncated 32750 lines...]
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-8.x/build.xml:679: The following error occurred while executing this line:
: Java returned: 1

Total time: 26 minutes 35 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-8.x - Build # 24 - Unstable

2019-02-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/24/

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testSearchRate

Error Message:
The trigger did not start in time

Stack Trace:
java.lang.AssertionError: The trigger did not start in time
at __randomizedtesting.SeedInfo.seed([E6356756EB548D64:BB7D79DF24922B2B]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testSearchRate(TestSimTriggerIntegration.java:1369)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14782 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
   [junit4]   2> Creating dataDir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J1/temp/solr.cloud.autoscaling.sim.TestSimTriggerIntegration_E6

[jira] [Updated] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11763:

Attachment: SOLR-11763.patch

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava 
to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558#discussion_r253279691
 
 

 ##
 File path: solr/core/ivy.xml
 ##
 @@ -133,6 +133,7 @@
 
 
 
+
 
 Review comment:
  Need to address the precommit failures from this (check license/notice). Fixed 
the initial class-not-found test failures, though. Will get to it later tonight.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11763:

Attachment: (was: SOLR-11763.patch)

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava 
to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558#discussion_r253279561
 
 

 ##
 File path: solr/core/ivy.xml
 ##
 @@ -133,6 +133,7 @@
 
 
 
+
 
 Review comment:
   Needed for Calcite upgrade


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on issue #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk commented on issue #558: SOLR-11763: Upgrade Guava to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558#issuecomment-460001097
 
 
   This compiles for both source and tests. I will set up some test runs later 
today, like I did for the Hadoop 3 upgrade, to make sure things look good.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava 
to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558#discussion_r253279179
 
 

 ##
 File path: lucene/ivy-versions.properties
 ##
 @@ -24,7 +24,7 @@ com.fasterxml.jackson.core.version = 2.9.6
 /com.github.ben-manes.caffeine/caffeine = 2.4.0
 /com.github.virtuald/curvesapi = 1.04
 
-/com.google.guava/guava = 14.0.1
+/com.google.guava/guava = 25.1-jre
 
 Review comment:
  Based on experience in KNOX-1611, we found that 26.0-jre+ broke backwards 
compatibility for things that Hadoop required in integration tests.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava 
to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558#discussion_r253279211
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/cloud/api/collections/TimeRoutedAlias.java
 ##
 @@ -36,7 +36,7 @@
 import java.util.function.Predicate;
 import java.util.function.Supplier;
 
-import com.google.common.base.Objects;
+import com.google.common.base.MoreObjects;
 
 Review comment:
   Guava changed `Objects` -> `MoreObjects`
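
For illustration, a small sketch of that rename on a made-up class (not the TimeRoutedAlias code itself). The fluent helper API is otherwise unchanged; only the class holding toStringHelper moved.

{code}
import com.google.common.base.MoreObjects;

public class RouteSpecSketch {
  private final String field;
  private final long maxCardinality;

  public RouteSpecSketch(String field, long maxCardinality) {
    this.field = field;
    this.maxCardinality = maxCardinality;
  }

  @Override
  public String toString() {
    // Guava 14: com.google.common.base.Objects.toStringHelper(this)...
    // Guava 18+: the helper lives in MoreObjects; Objects.toStringHelper was later removed.
    return MoreObjects.toStringHelper(this)
        .add("field", field)
        .add("maxCardinality", maxCardinality)
        .toString();
  }

  public static void main(String[] args) {
    // Prints: RouteSpecSketch{field=category, maxCardinality=20}
    System.out.println(new RouteSpecSketch("category", 20));
  }
}
{code}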


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk commented on a change in pull request #558: SOLR-11763: Upgrade Guava 
to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558#discussion_r253279205
 
 

 ##
 File path: lucene/ivy-versions.properties
 ##
 @@ -101,10 +101,10 @@ net.thisptr.version = 0.0.8
 
 /org.apache.ant/ant = 1.8.2
 
-org.apache.calcite.avatica.version = 1.10.0
+org.apache.calcite.avatica.version = 1.13.0
 /org.apache.calcite.avatica/avatica-core = 
${org.apache.calcite.avatica.version}
 
-org.apache.calcite.version = 1.13.0
+org.apache.calcite.version = 1.18.0
 
 Review comment:
  Calcite uses Guava, and upgrading it gives us the ability to use the latest 
Guava version here. Calcite 1.16.0+ requires a Guava later than 19. 
`calcite.avatica.version` and `janino.version` are upgraded accordingly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759209#comment-16759209
 ] 

Kevin Risden commented on SOLR-11763:
-

The PR is based on the Apache branch jira/solr-11763. I have checked that 
everything compiles so far, but I have not checked that tests pass. I'll set up 
some test runs on this later today to see how things are looking.

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-11763:
---

Assignee: Kevin Risden  (was: Varun Thacker)

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Assignee: Kevin Risden
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch, 
> SOLR-11763.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk opened a new pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre

2019-02-02 Thread GitBox
risdenk opened a new pull request #558: SOLR-11763: Upgrade Guava to 25.1-jre
URL: https://github.com/apache/lucene-solr/pull/558
 
 
   Upgrades Guava and related dependencies like Calcite.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759202#comment-16759202
 ] 

Kevin Risden edited comment on SOLR-11763 at 2/2/19 9:05 PM:
-

In the Apache Knox project we have integration tests with Hadoop (just like 
Solr basically) and we were able to upgrade only to 25.1-jre for the Hadoop 
3.2.0 integration tests. 26.0 had some backwards incompatible changes. Details 
in KNOX-1611


was (Author: risdenk):
In the Apache Knox project we have integration tests with Hadoop (just like 
Solr basically) and we were able to upgrade only to 25.1-jre for the Hadoop 
integration tests. 26.0 had some backwards incompatible changes. Details in 
KNOX-1611

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch
>
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11260) Update Guava to 23.0

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11260:

Fix Version/s: (was: 8.0)

> Update Guava to 23.0
> 
>
> Key: SOLR-11260
> URL: https://issues.apache.org/jira/browse/SOLR-11260
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Ahmet Arslan
>Assignee: Ahmet Arslan
>Priority: Minor
> Attachments: SOLR-11260.patch, SOLR-11260.patch
>
>
> Solr 6.6.0 depends on a pretty old version of guava.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759202#comment-16759202
 ] 

Kevin Risden commented on SOLR-11763:
-

In the Apache Knox project we have integration tests with Hadoop (just like 
Solr basically) and we were able to upgrade only to 25.1-jre for the Hadoop 
integration tests. 26.0 had some backwards incompatible changes. Details in 
KNOX-1611

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch
>
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11763) Upgrade Guava to 25.1-jre

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11763:

Summary: Upgrade Guava to 25.1-jre  (was: Upgrade Guava to 23.0)

> Upgrade Guava to 25.1-jre
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch
>
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-5007.

Resolution: Fixed

Guessing this was either fixed, or it needs to be looked at again from a clean slate on master.

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
> failing a completely different test.
> 
>
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759193#comment-16759193
 ] 

Kevin Risden commented on SOLR-11473:
-

SOLR-7301 would also benefit from the approach here instead of hardcoding each 
prefix.

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Priority: Minor
> Attachments: SOLR-11473.patch
>
>
> Not sure if it's a bug or a missing feature :) I'm trying to make Solr work 
> on Alluxio, as described by [~thelabdude] in 
> https://www.slideshare.net/thelabdude/running-solr-in-the-cloud-at-memory-speed-with-alluxio/1
> The problem I'm facing here is with autoAddReplicas. If I have 
> replicationFactor=1 and the node with that replica dies, the node taking over 
> incorrectly assigns the data directory. For example:
> before
> {code}"dataDir":"alluxio://localhost:19998/solr/test/",{code}
> after
> {code}"dataDir":"alluxio://localhost:19998/solr/test/core_node1/alluxio://localhost:19998/solr/test/",{code}
> The same happens for ulogDir. Apparently, this has to do with this bit from 
> HDFSDirectoryFactory:
> {code}  public boolean isAbsolute(String path) {
> return path.startsWith("hdfs:/");
>   }{code}
> If I add "alluxio:/" in there, the paths are correct and the index is 
> recovered.
> I see a few options here:
> * add "alluxio:/" to the list there
> * add a regular expression along the lines of \[a-z]*:/ (I hope that's not 
> too expensive; I'm not sure how often this method is called)
> * don't do anything and expect alluxio to work with an "hdfs:/" path? I 
> actually tried that and didn't manage to make it work
> * have a different DirectoryFactory or something else?
> What do you think?
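
A minimal sketch of the regex option mentioned above, accepting any URI scheme instead of hardcoding "hdfs:/". This is illustrative only, not the attached patch; the class name is made up.

{code}
import java.util.regex.Pattern;

public class SchemePrefixSketch {
  // Matches any URI-style scheme prefix, e.g. "hdfs:/", "alluxio:/", "maprfs:/".
  private static final Pattern SCHEME_PREFIX =
      Pattern.compile("^[a-zA-Z][a-zA-Z0-9+.-]*:/.*");

  public static boolean isAbsolute(String path) {
    // Instead of path.startsWith("hdfs:/"), accept any scheme prefix.
    return SCHEME_PREFIX.matcher(path).matches();
  }

  public static void main(String[] args) {
    System.out.println(isAbsolute("hdfs://namenode:8020/solr/core1"));       // true
    System.out.println(isAbsolute("alluxio://localhost:19998/solr/test/"));  // true
    System.out.println(isAbsolute("data/index"));                            // false
  }
}
{code}

Precompiling the pattern keeps the per-call cost to a single regex match, which should address the expense concern raised in the second option above.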



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11260) Update Guava to 23.0

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11260:

Fix Version/s: (was: SOLR-11763)

> Update Guava to 23.0
> 
>
> Key: SOLR-11260
> URL: https://issues.apache.org/jira/browse/SOLR-11260
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Ahmet Arslan
>Priority: Minor
> Attachments: SOLR-11260.patch, SOLR-11260.patch
>
>
> Solr 6.6.0 depends on a pretty old version of guava.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5780) Solr should benefit from Guava 16.0.1

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-5780.

Resolution: Duplicate

Marking as duplicate of SOLR-11763 since that is for a more recent version of 
Guava and there is recent conversation there.

> Solr should benefit from Guava 16.0.1
> -
>
> Key: SOLR-5780
> URL: https://issues.apache.org/jira/browse/SOLR-5780
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.7
> Environment: All.
>Reporter: Guido Medina
>Priority: Major
> Attachments: SOLR-5780.patch
>
>
> Solr is using concurrentlinkedhashmap v1.2 and Guava 14.0.1 at the same time. 
> According to the concurrentlinkedhashmap author(s), that project's main 
> objective is to introduce ideas which, once proven, are ported to Guava.
> concurrentlinkedhashmap v1.2 was designed for Java 5 and v1.4 for Java 6+, 
> which is the target version Solr 4.x requires. v1.4 was a great improvement 
> in performance and memory impact compared to v1.2, which was ported to Guava 
> (I strongly believe v16.0.1+ will do).
> *Pertinent material:*
> * 
> [http://stackoverflow.com/questions/15299554/what-does-it-mean-that-concurrentlinkedhashmap-has-been-integrated-into-guava]
> * [https://code.google.com/p/concurrentlinkedhashmap/wiki/Changelog]
> All that said, concurrentlinkedhashmap should be eliminated _- OR kept up to 
> date, because it is the core of the in-memory cache, same as Guava -_ and code 
> using it should instead use the MapMaker builder from Guava.
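
For context, a minimal sketch of what such a replacement could look like with a modern Guava (not code from the attached patch; names and sizes are made up for illustration). In recent Guava the bounded-cache functionality lives in CacheBuilder, while MapMaker remains the builder for plain concurrent maps.

{code}
import java.util.concurrent.ConcurrentMap;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.collect.MapMaker;

public class GuavaCacheSketch {
  public static void main(String[] args) {
    // Bounded in-memory cache (the concurrentlinkedhashmap use case) via CacheBuilder.
    Cache<String, byte[]> cache = CacheBuilder.newBuilder()
        .maximumSize(10_000)   // evicts entries once the cache grows beyond this size
        .build();
    cache.put("doc-1", new byte[16]);

    // Plain concurrent map via the MapMaker builder mentioned above.
    ConcurrentMap<String, String> map = new MapMaker()
        .concurrencyLevel(8)
        .makeMap();
    map.put("k", "v");

    System.out.println(cache.size() + " cached entries, " + map.size() + " map entries");
  }
}
{code}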



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-5007:
---
Component/s: Tests
 hdfs
 Hadoop Integration

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
> failing a completely different test.
> 
>
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759196#comment-16759196
 ] 

Kevin Risden commented on SOLR-5007:


It's been 3+ years since the last comment. Not sure if these threads are still 
leaking. Hadoop is now upgraded to Hadoop 3 via SOLR-9515. SOLR-7289 may have 
also fixed this issue, or at least limited it to known failing threads.

Planning to close this since there is not much here to work from.

> TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
> failing a completely different test.
> 
>
> Key: SOLR-5007
> URL: https://issues.apache.org/jira/browse/SOLR-5007
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7301) HdfsDirectoryFactory does not support maprfs

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759192#comment-16759192
 ] 

Kevin Risden commented on SOLR-7301:


This would be better solved with the implementation in SOLR-11473.

> HdfsDirectoryFactory does not support maprfs
> 
>
> Key: SOLR-7301
> URL: https://issues.apache.org/jira/browse/SOLR-7301
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Reporter: Shenghua Wan
>Priority: Minor
> Attachments: fix_support_maprfs.patch
>
>
> when map-reduce index generator was run, an exception was thrown as
> 2015-03-24 11:06:04,569 WARN org.apache.hadoop.mapred.Child: Error running 
> child
> java.lang.IllegalStateException: Failed to initialize record writer for 
> MapReduceSolrIndex, attempt_201503171620_12558_r_00_0
>   at 
> org.apache.solr.hadoop.SolrRecordWriter.(SolrRecordWriter.java:127)
>   at 
> org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:164)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:605)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:456)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>   at org.apache.hadoop.mapred.Child.main(Child.java:264)
> Caused by: org.apache.solr.common.SolrException: Unable to create core [core1]
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:507)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
>   at 
> org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:172)
>   at 
> org.apache.solr.hadoop.SolrRecordWriter.(SolrRecordWriter.java:120)
>   ... 8 more
> Caused by: org.apache.solr.common.SolrException: You must set the 
> HdfsDirectoryFactory param solr.hdfs.home for relative dataDir paths to work
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.getDataHome(HdfsDirectoryFactory.java:271)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:699)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:646)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
>   ... 11 more
> After investigation, I found that the class HdfsDirectoryFactory hardcoded 
> "hdfs:/". 
> A patch is provided in the attachment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7215) non reproducible Suite failures due to excessive sysout due to HDFS lease renewal WARN logs due to connection refused -- even if test doesn't use HDFS (ie: threads leakin

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-7215.

Resolution: Fixed

Resolving as most likely fixed somewhere along the way, since it's been 3 years 
since the last comment.

> non reproducible Suite failures due to excessive sysout due to HDFS lease 
> renewal WARN logs due to connection refused -- even if test doesn't use HDFS 
> (ie: threads leaking between tests)
> --
>
> Key: SOLR-7215
> URL: https://issues.apache.org/jira/browse/SOLR-7215
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
> Attachments: tests-report.txt_suite-failure-due-to-sysout.txt.zip
>
>
> On my local machine, I've lately noticed a lot of sporadic, non-reproducible 
> failures like these...
> {noformat}
>   2> NOTE: reproduce with: ant test  -Dtestcase=ScriptEngineTest 
> -Dtests.seed=E254A7E69EC7212A -Dtests.slow=true -Dtests.locale=sv 
> -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> [14:34:23.749] ERROR   0.00s J1 | ScriptEngineTest (suite) <<<
>> Throwable #1: java.lang.AssertionError: The test or suite printed 10984 
> bytes to stdout and stderr, even though the limit was set to 8192 bytes. 
> Increase the limit with @Limit, ignore it completely with 
> @SuppressSysoutChecks or run with -Dtests.verbose=true
>>  at __randomizedtesting.SeedInfo.seed([E254A7E69EC7212A]:0)
>>  at 
> org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:212)
> {noformat}
> Invariably, looking at the logs of tests that fail for this reason, I see 
> multiple instances of these WARN messages...
> {noformat}
>   2> 601361 T3064 oahh.LeaseRenewer.run WARN Failed to renew lease for 
> [DFSClient_NONMAPREDUCE_-253604438_2947] for 92 seconds.  Will retry shortly 
> ... java.net.ConnectException: Call From frisbee/127.0.1.1 to localhost:40618 
> failed on connection exception: java.net.ConnectException: Connection 
> refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
>   2>  at sun.reflect.GeneratedConstructorAccessor268.newInstance(Unknown 
> Source)
>   2>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  ...
> {noformat}
> ...the full stack traces of these exceptions are typically 36 lines long 
> (not counting the suppressed "... 17 more" at the end).
> Doing some basic crunching of the "tests-report.txt" file from a recent run 
> of all "solr-core" tests (that caused the above failure) leads to some pretty 
> damn disconcerting numbers...
> {noformat}
> hossman@frisbee:~/tmp$ wc -l tests-report.txt_suite-failure-due-to-sysout.txt
> 1049177 tests-report.txt_suite-failure-due-to-sysout.txt
> hossman@frisbee:~/tmp$ grep "Suite: org.apache.solr" 
> tests-report.txt_suite-failure-due-to-sysout.txt | wc -l
> 465
> hossman@frisbee:~/tmp$ grep "LeaseRenewer.run WARN Failed to renew lease" 
> tests-report.txt_suite-failure-due-to-sysout.txt | grep 
> http://wiki.apache.org/hadoop/ConnectionRefused | wc -l
> 1988
> hossman@frisbee:~/tmp$ calc
> 1988 * 36
> 71568
> {noformat}
> So running 465 Solr test suites, we got ~2 thousand of these "Failed to renew 
> lease" WARNings.  Of the ~1 million total lines of log messages from all 
> tests, ~70 thousand (~7%) are coming from these WARNing messages -- which can 
> evidently be safely ignored?
> Something seems broken here.
> Someone who understands this area of the code should either:
> * investigate & fix the code/test not to have these lease renewal problems
> * tweak our test logging configs to suppress these WARN messages (a minimal config sketch follows)
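
As a minimal sketch of the second option, a single log4j threshold override would hide 
these WARNs in test logs. This assumes the Hadoop 2 logger behind the "oahh.LeaseRenewer" 
prefix above is org.apache.hadoop.hdfs.LeaseRenewer and that the test harness reads a 
log4j 1.x properties file; both assumptions would need to be confirmed before relying on it.

{noformat}
# Sketch only: raise the threshold for HDFS lease-renewal warnings in tests.
# Assumes log4j 1.x properties syntax and the Hadoop 2 logger name below.
log4j.logger.org.apache.hadoop.hdfs.LeaseRenewer=ERROR
{noformat}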



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7215) non reproducible Suite failures due to excessive sysout due to HDFS lease renewal WARN logs due to connection refused -- even if test doesn't use HDFS (ie: threads leaki

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759194#comment-16759194
 ] 

Kevin Risden commented on SOLR-7215:


Not sure what the status is here. I would guess this is either:

a) not an issue any more (lots has changed with HDFS thread cleanup),
b) fixed later due to HDFS thread cleanup, or
c) still an issue, but it isn't clear that it has happened recently.

Planning to resolve this since I haven't seen it and the last comment was 3+ 
years ago.

SOLR-9515 with the Hadoop 3 upgrade was recent, so I'm trying to clean up old 
HDFS-related JIRAs where it isn't clear they still happen.

> non reproducible Suite failures due to excessive sysout due to HDFS lease 
> renewal WARN logs due to connection refused -- even if test doesn't use HDFS 
> (ie: threads leaking between tests)
> --
>
> Key: SOLR-7215
> URL: https://issues.apache.org/jira/browse/SOLR-7215
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
> Attachments: tests-report.txt_suite-failure-due-to-sysout.txt.zip
>
>
> On my local machine, I've lately noticed a lot of sporadic, non-reproducible 
> failures like these...
> {noformat}
>   2> NOTE: reproduce with: ant test  -Dtestcase=ScriptEngineTest 
> -Dtests.seed=E254A7E69EC7212A -Dtests.slow=true -Dtests.locale=sv 
> -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> [14:34:23.749] ERROR   0.00s J1 | ScriptEngineTest (suite) <<<
>> Throwable #1: java.lang.AssertionError: The test or suite printed 10984 
> bytes to stdout and stderr, even though the limit was set to 8192 bytes. 
> Increase the limit with @Limit, ignore it completely with 
> @SuppressSysoutChecks or run with -Dtests.verbose=true
>>  at __randomizedtesting.SeedInfo.seed([E254A7E69EC7212A]:0)
>>  at 
> org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:212)
> {noformat}
> Invariably, looking at the logs of tests that fail for this reason, I see 
> multiple instances of these WARN messages...
> {noformat}
>   2> 601361 T3064 oahh.LeaseRenewer.run WARN Failed to renew lease for 
> [DFSClient_NONMAPREDUCE_-253604438_2947] for 92 seconds.  Will retry shortly 
> ... java.net.ConnectException: Call From frisbee/127.0.1.1 to localhost:40618 
> failed on connection exception: java.net.ConnectException: Connection 
> refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
>   2>  at sun.reflect.GeneratedConstructorAccessor268.newInstance(Unknown 
> Source)
>   2>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  ...
> {noformat}
> ...the full stack traces of these exceptions are typically 36 lines long 
> (not counting the suppressed "... 17 more" at the end).
> Doing some basic crunching of the "tests-report.txt" file from a recent run 
> of all "solr-core" tests (that caused the above failure) leads to some pretty 
> damn disconcerting numbers...
> {noformat}
> hossman@frisbee:~/tmp$ wc -l tests-report.txt_suite-failure-due-to-sysout.txt
> 1049177 tests-report.txt_suite-failure-due-to-sysout.txt
> hossman@frisbee:~/tmp$ grep "Suite: org.apache.solr" 
> tests-report.txt_suite-failure-due-to-sysout.txt | wc -l
> 465
> hossman@frisbee:~/tmp$ grep "LeaseRenewer.run WARN Failed to renew lease" 
> tests-report.txt_suite-failure-due-to-sysout.txt | grep 
> http://wiki.apache.org/hadoop/ConnectionRefused | wc -l
> 1988
> hossman@frisbee:~/tmp$ calc
> 1988 * 36
> 71568
> {noformat}
> So running 465 Solr test suites, we got ~2 thousand of these "Failed to renew 
> lease" WARNings.  Of the ~1 million total lines of log messages from all 
> tests, ~70 thousand (~7%) are coming from these WARNing messages -- which can 
> evidently be safely ignored?
> Something seems broken here.
> Someone who understands this area of the code should either:
> * investigate & fix the code/test not to have these lease renewal problems
> * tweak our test logging configs to suppress these WARN messages



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6008) HDFS tests are using local filesystem because solr.hdfs.home is set to a local filesystem path.

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-6008:
---
Fix Version/s: (was: 6.0)
   (was: 4.9)

> HDFS tests are using local filesystem because solr.hdfs.home is set to a 
> local filesystem path.
> ---
>
> Key: SOLR-6008
> URL: https://issues.apache.org/jira/browse/SOLR-6008
> Project: Solr
>  Issue Type: Test
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7301) HdfsDirectoryFactory does not support maprfs

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-7301.

Resolution: Duplicate

> HdfsDirectoryFactory does not support maprfs
> 
>
> Key: SOLR-7301
> URL: https://issues.apache.org/jira/browse/SOLR-7301
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Reporter: Shenghua Wan
>Priority: Minor
> Attachments: fix_support_maprfs.patch
>
>
> When the map-reduce index generator was run, the following exception was thrown:
> 2015-03-24 11:06:04,569 WARN org.apache.hadoop.mapred.Child: Error running 
> child
> java.lang.IllegalStateException: Failed to initialize record writer for 
> MapReduceSolrIndex, attempt_201503171620_12558_r_00_0
>   at 
> org.apache.solr.hadoop.SolrRecordWriter.<init>(SolrRecordWriter.java:127)
>   at 
> org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:164)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:605)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:456)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>   at org.apache.hadoop.mapred.Child.main(Child.java:264)
> Caused by: org.apache.solr.common.SolrException: Unable to create core [core1]
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:507)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
>   at 
> org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:172)
>   at 
> org.apache.solr.hadoop.SolrRecordWriter.<init>(SolrRecordWriter.java:120)
>   ... 8 more
> Caused by: org.apache.solr.common.SolrException: You must set the 
> HdfsDirectoryFactory param solr.hdfs.home for relative dataDir paths to work
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.getDataHome(HdfsDirectoryFactory.java:271)
>   at org.apache.solr.core.SolrCore.<init>(SolrCore.java:699)
>   at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
>   ... 11 more
> After investigation, I found that the class HdfsDirectoryFactory hardcodes 
> "hdfs:/". 
> A patch is provided in the attachment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5584) Update to Guava 15.0

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759186#comment-16759186
 ] 

Kevin Risden commented on SOLR-5584:


Related to later efforts to upgrade Guava.

> Update to Guava 15.0
> 
>
> Key: SOLR-5584
> URL: https://issues.apache.org/jira/browse/SOLR-5584
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Priority: Minor
> Fix For: 6.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5584) Update to Guava 15.0

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-5584.

Resolution: Duplicate

Marking as duplicate since there are related tickets upgrading Guava to a newer 
version that have been updated recently.

> Update to Guava 15.0
> 
>
> Key: SOLR-5584
> URL: https://issues.apache.org/jira/browse/SOLR-5584
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759191#comment-16759191
 ] 

Kevin Risden commented on SOLR-8033:


FWIW, this is still like this on master, ~3 years later.

> useless if branch (commented out log.debug in HdfsTransactionLog constructor)
> -
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Priority: Minor
>  Labels: newdev
>
> In the HdfsTransactionLog() constructor of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java), 
> the if branch in the following code snippet performs no action; we should 
> either add code to handle this case or delete the if branch entirely (see the 
> sketch after the snippet).
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }
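
For what it's worth, the guarded-logging variant of the fix would look roughly like the 
sketch below; the other option is simply deleting the empty branch. This is a sketch only: 
the class and method here are stand-ins, not the actual HdfsTransactionLog code, and it 
assumes SLF4J is on the classpath as it is in Solr.

{code}
// Sketch only: the guarded-logging variant of the fix, using SLF4J as the
// surrounding Solr code does. The class and method are stand-ins for the
// HdfsTransactionLog constructor; only the if-branch replacement matters.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class TlogDebugLoggingSketch {
  private static final Logger log = LoggerFactory.getLogger(TlogDebugLoggingSketch.class);

  void onNewTransactionLog(String tlogFile, boolean openExisting) {
    // Replaces the empty "if (debug) { /* commented-out log.debug */ }" branch:
    // either delete that branch outright, or guard a real debug statement.
    if (log.isDebugEnabled()) {
      log.debug("New TransactionLog file={}, openExisting={}", tlogFile, openExisting);
    }
  }
}
{code}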



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8033:
---
Component/s: hdfs
 Hadoop Integration

> useless if branch (commented out log.debug in HdfsTransactionLog constructor)
> -
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Priority: Minor
>  Labels: newdev
>
> In the HdfsTransactionLog() constructor of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java), 
> the if branch in the following code snippet performs no action; we should 
> either add code to handle this case or delete the if branch entirely.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12869) Unit test stalling

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-12869.
-
Resolution: Duplicate

Either this is a Guava issue that should be tracked in the linked Guava 
ticket, or it was resolved in Mark's test cleanup, or the HDFS tests are being 
addressed in another linked ticket.

> Unit test stalling
> --
>
> Key: SOLR-12869
> URL: https://issues.apache.org/jira/browse/SOLR-12869
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.4
>Reporter: Vishal
>Priority: Minor
>  Labels: test
> Attachments: solr-release.diff
>
>
> When the guava dependency is upgraded from 14.0.1 to the latest version 
> (26.0-jre/25.0-jre), some unit tests stall indefinitely and testing never 
> finishes.
> For example, here HdfsNNFailoverTest stalls indefinitely. Log excerpts from a 
> unit test run with guava 25.0-jre:
> 13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
> HdfsNNFailoverTest (suite)
> 13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
> HdfsNNFailoverTest (suite)
> 13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
> HdfsNNFailoverTest (suite)
> 13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
> HdfsNNFailoverTest (suite)
> 13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
> HdfsNNFailoverTest (suite)
> 13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
> HdfsNNFailoverTest (suite)
> Note: upgrading guava from the default version 14.0.1 to 25.0-jre or 26.0-jre 
> requires solr code changes. The diff file (solr-release.diff) is attached 
> to this bug.
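
A practical next step when a suite stalls like this is a thread dump of the stuck forked 
JVM, which usually shows which Guava/HDFS call never returns. A rough example, assuming 
the PID printed in the HEARTBEAT lines above (258) is still valid on the machine running 
the build:

{noformat}
# Sketch: dump the threads of the stalled junit4 fork to see where
# HdfsNNFailoverTest is stuck (258 is the PID from the HEARTBEAT lines).
jstack -l 258 > stalled-HdfsNNFailoverTest-threads.txt
{noformat}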



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12869) Unit test stalling

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759189#comment-16759189
 ] 

Kevin Risden commented on SOLR-12869:
-

Looks like this could be related to SOLR-13060 and how HDFS doesn't terminate. 
If this issue is Guava-related, we should focus on SOLR-11763 since it covers a 
newer Guava upgrade. SOLR-9515 was merged to the master branch to upgrade to 
Hadoop 3; it's not clear whether these issues still exist there.

> Unit test stalling
> --
>
> Key: SOLR-12869
> URL: https://issues.apache.org/jira/browse/SOLR-12869
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.4
>Reporter: Vishal
>Priority: Minor
>  Labels: test
> Attachments: solr-release.diff
>
>
> When the guava dependency is upgraded from 14.0.1 to the latest version 
> (26.0-jre/25.0-jre), some unit tests stall indefinitely and testing never 
> finishes.
> For example, here HdfsNNFailoverTest stalls indefinitely. Log excerpts from a 
> unit test run with guava 25.0-jre:
> 13:54:39.392 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:54:39, stalled for 70.6s at: 
> HdfsNNFailoverTest (suite)
> 13:55:39.394 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:55:39, stalled for  131s at: 
> HdfsNNFailoverTest (suite)
> 13:56:39.395 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:56:39, stalled for  191s at: 
> HdfsNNFailoverTest (suite)
> 13:57:39.396 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:57:39, stalled for  251s at: 
> HdfsNNFailoverTest (suite)
> 13:58:39.398 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:58:39, stalled for  311s at: 
> HdfsNNFailoverTest (suite)
> 13:59:39.399 [QUIET] [system.out]    [junit4] HEARTBEAT J0 
> PID(258@8471261c0ae9): 2018-10-12T13:59:39, stalled for  371s at: 
> HdfsNNFailoverTest (suite)
> Note: upgrading guava from the default version 14.0.1 to 25.0-jre or 26.0-jre 
> requires solr code changes. The diff file (solr-release.diff) is attached 
> to this bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10052) HdfsWriteToMultipleCollectionsTest failure

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-10052.
-
Resolution: Fixed

Marking as resolved since it doesn't fail on master anymore.

> HdfsWriteToMultipleCollectionsTest failure
> --
>
> Key: SOLR-10052
> URL: https://issues.apache.org/jira/browse/SOLR-10052
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Steve Rowe
>Priority: Major
>
> My Jenkins found a reproducing branch_6x seed:
> {noformat}
> Checking out Revision 71a198ce309e35c8b31bf472b3d111dbaed276bf 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=HdfsWriteToMultipleCollectionsTest -Dtests.method=test 
> -Dtests.seed=4BBA249D2597D646 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=es-MX -Dtests.timezone=EST5EDT -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 18.3s J3 | HdfsWriteToMultipleCollectionsTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4BBA249D2597D646:C3EE1B478B6E]:0)
>[junit4]>  at 
> org.apache.solr.cloud.hdfs.HdfsWriteToMultipleCollectionsTest.test(HdfsWriteToMultipleCollectionsTest.java:137)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {rnd_b=PostingsFormat(name=MockRandom), _version_=Lucene50(blocksize=128), 
> a_t=FSTOrd50, a_i=PostingsFormat(name=MockRandom), 
> id=PostingsFormat(name=MockRandom)}, docValues:{}, maxPointsInLeafNode=703, 
> maxMBSortInHeap=7.5726997055370955, 
> sim=RandomSimilarity(queryNorm=true,coord=yes): {}, locale=es-MX, 
> timezone=EST5EDT
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=13,free=290332752,total=509083648
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10308) Solr fails to work with Guava 21.0

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-10308.
-
Resolution: Duplicate

Marking as duplicate of SOLR-11260 since that focuses on upgrading Guava to a 
later version. Trying to consolidate efforts to get Guava upgraded.

> Solr fails to work with Guava 21.0
> --
>
> Key: SOLR-10308
> URL: https://issues.apache.org/jira/browse/SOLR-10308
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Affects Versions: 6.4.2
>Reporter: Vincent Massol
>Priority: Major
> Attachments: SOLR-10308.patch
>
>
> This is what we get:
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.firstNonNull(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;
>   at 
> org.apache.solr.handler.component.HighlightComponent.prepare(HighlightComponent.java:118)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2299)
>   at 
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:178)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>   at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
>   at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
>   at 
> org.xwiki.search.solr.internal.AbstractSolrInstance.query(AbstractSolrInstance.java:117)
>   at 
> org.xwiki.query.solr.internal.SolrQueryExecutor.execute(SolrQueryExecutor.java:122)
>   at 
> org.xwiki.query.internal.DefaultQueryExecutorManager.execute(DefaultQueryExecutorManager.java:72)
>   at 
> org.xwiki.query.internal.SecureQueryExecutorManager.execute(SecureQueryExecutorManager.java:67)
>   at org.xwiki.query.internal.DefaultQuery.execute(DefaultQuery.java:287)
>   at org.xwiki.query.internal.ScriptQuery.execute(ScriptQuery.java:237)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:395)
>   at 
> org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:384)
>   at 
> org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:173)
>   ... 183 more
> {noformat}
> Guava 21 has removed some signatures that solr is currently using.
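
For reference, com.google.common.base.Objects.firstNonNull was deprecated in favour of 
MoreObjects.firstNonNull and is no longer present by Guava 21, which is what produces the 
NoSuchMethodError above. A minimal sketch of the usual replacements follows; the class and 
method names are illustrative, and this is not the attached patch.

{code}
// Sketch of the usual replacements for the removed
// com.google.common.base.Objects.firstNonNull(a, b). Class and method names
// are illustrative; this is not the attached patch.
import com.google.common.base.MoreObjects;

class FirstNonNullSketch {

  // Option 1: the Guava 18+ replacement with the same semantics
  // (throws NullPointerException when both arguments are null).
  static String pick(String preferred, String fallback) {
    return MoreObjects.firstNonNull(preferred, fallback);
  }

  // Option 2: drop the Guava call altogether. Note the different edge case:
  // this returns null (rather than throwing) when both arguments are null.
  static String pickWithoutGuava(String preferred, String fallback) {
    return preferred != null ? preferred : fallback;
  }
}
{code}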



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5584) Update to Guava 15.0

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-5584:
---
Fix Version/s: (was: 6.0)

> Update to Guava 15.0
> 
>
> Key: SOLR-5584
> URL: https://issues.apache.org/jira/browse/SOLR-5584
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9075) Look at using hdfs-client jar for smaller core dependency

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9075:
---
Component/s: hdfs
 Hadoop Integration

> Look at using hdfs-client jar for smaller core dependency
> -
>
> Key: SOLR-9075
> URL: https://issues.apache.org/jira/browse/SOLR-9075
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9075) Look at using hdfs-client jar for smaller core dependency

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9075:
---
Summary: Look at using hdfs-client jar for smaller core dependency  (was: 
Look at using hdfs-client jar in HDFS 2.8 for smaller core dependency.)

> Look at using hdfs-client jar for smaller core dependency
> -
>
> Key: SOLR-9075
> URL: https://issues.apache.org/jira/browse/SOLR-9075
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10052) HdfsWriteToMultipleCollectionsTest failure

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759184#comment-16759184
 ] 

Kevin Risden commented on SOLR-10052:
-

The seed doesn't reproduce on master anymore. This could be due to the Hadoop 3 
upgrade from SOLR-9515. I did not look at any of the 7.x branches.

ant test  -Dtestcase=HdfsWriteToMultipleCollectionsTest -Dtests.method=test 
-Dtests.seed=4BBA249D2597D646 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=es-MX -Dtests.timezone=EST5EDT -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

> HdfsWriteToMultipleCollectionsTest failure
> --
>
> Key: SOLR-10052
> URL: https://issues.apache.org/jira/browse/SOLR-10052
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Steve Rowe
>Priority: Major
>
> My Jenkins found a reproducing branch_6x seed:
> {noformat}
> Checking out Revision 71a198ce309e35c8b31bf472b3d111dbaed276bf 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=HdfsWriteToMultipleCollectionsTest -Dtests.method=test 
> -Dtests.seed=4BBA249D2597D646 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=es-MX -Dtests.timezone=EST5EDT -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 18.3s J3 | HdfsWriteToMultipleCollectionsTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4BBA249D2597D646:C3EE1B478B6E]:0)
>[junit4]>  at 
> org.apache.solr.cloud.hdfs.HdfsWriteToMultipleCollectionsTest.test(HdfsWriteToMultipleCollectionsTest.java:137)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {rnd_b=PostingsFormat(name=MockRandom), _version_=Lucene50(blocksize=128), 
> a_t=FSTOrd50, a_i=PostingsFormat(name=MockRandom), 
> id=PostingsFormat(name=MockRandom)}, docValues:{}, maxPointsInLeafNode=703, 
> maxMBSortInHeap=7.5726997055370955, 
> sim=RandomSimilarity(queryNorm=true,coord=yes): {}, locale=es-MX, 
> timezone=EST5EDT
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=13,free=290332752,total=509083648
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12040) HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every jenkins run

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759180#comment-16759180
 ] 

Kevin Risden commented on SOLR-12040:
-

I'm curious if these failures get better/worse with Hadoop 3 after SOLR-9515. 

> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every 
> jenkins run
> 
>
> Key: SOLR-12040
> URL: https://issues.apache.org/jira/browse/SOLR-12040
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Mark Miller
>Priority: Major
>
> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 are thin subclasses of 
> BasicDistributedZkTest & BasicDistributedZk2 that just tweak the setup to use 
> HDFS, and only run @Nightly.
> These tests are failing virtually every time they are run by jenkins - either 
> at a method level, or at a suite level (due to thread leaks, timeouts, etc...) 
> yet their non-HDFS superclasses virtually never fail.
> Per the jenkins failure rate reports I've set up, here are the failure rates of 
> all tests matching "BasicDistributed" for the past 7 days (note that the 
> non-HDFS tests aren't even listed, because they haven't failed at all even 
> though they are non-nightly and have cumulatively run ~750 times in the past 
> 7 days)
> http://fucit.org/solr-jenkins-reports/failure-report.html
> {noformat}
> "Suite?","Class","Method","Rate","Runs","Fails"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","","53.3","15","8"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","test","18.75","16","3"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","","46.1538461538462","13","6"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","test","7.69230769230769","13","1"
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10052) HdfsWriteToMultipleCollectionsTest failure

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-10052:

Component/s: Tests
 hdfs
 Hadoop Integration

> HdfsWriteToMultipleCollectionsTest failure
> --
>
> Key: SOLR-10052
> URL: https://issues.apache.org/jira/browse/SOLR-10052
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Steve Rowe
>Priority: Major
>
> My Jenkins found a reproducing branch_6x seed:
> {noformat}
> Checking out Revision 71a198ce309e35c8b31bf472b3d111dbaed276bf 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=HdfsWriteToMultipleCollectionsTest -Dtests.method=test 
> -Dtests.seed=4BBA249D2597D646 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=es-MX -Dtests.timezone=EST5EDT -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 18.3s J3 | HdfsWriteToMultipleCollectionsTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4BBA249D2597D646:C3EE1B478B6E]:0)
>[junit4]>  at 
> org.apache.solr.cloud.hdfs.HdfsWriteToMultipleCollectionsTest.test(HdfsWriteToMultipleCollectionsTest.java:137)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {rnd_b=PostingsFormat(name=MockRandom), _version_=Lucene50(blocksize=128), 
> a_t=FSTOrd50, a_i=PostingsFormat(name=MockRandom), 
> id=PostingsFormat(name=MockRandom)}, docValues:{}, maxPointsInLeafNode=703, 
> maxMBSortInHeap=7.5726997055370955, 
> sim=RandomSimilarity(queryNorm=true,coord=yes): {}, locale=es-MX, 
> timezone=EST5EDT
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=13,free=290332752,total=509083648
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11010) OutOfMemoryError in tests when using HDFS BlockCache

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11010:

Component/s: Tests
 Hadoop Integration

> OutOfMemoryError in tests when using HDFS BlockCache
> 
>
> Key: SOLR-11010
> URL: https://issues.apache.org/jira/browse/SOLR-11010
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Affects Versions: 7.0, 8.0
>Reporter: Andrzej Bialecki 
>Priority: Major
>
> Spin-off from SOLR-10878: the newly added {{MoveReplicaHDFSTest}} fails on 
> jenkins (but rarely locally) with the following stacktrace:
> {code}
>[junit4]   2> 13619 ERROR (qtp1885193567-48) [n:127.0.0.1:50324_solr 
> c:movereplicatest_coll s:shard2 r:core_node4 
> x:movereplicatest_coll_shard2_replica_n2] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'movereplicatest_coll_shard2_replica_n2': Unable to create core 
> [movereplicatest_coll_shard2_replica_n2] Caused by: Direct buffer memory
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:938)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$164(CoreAdminOperation.java:91)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:534)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>[junit4]   2>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(Queue

[jira] [Resolved] (SOLR-6423) HdfsCollectionsAPIDistributedZkTest test fail: Could not find new collection awholynewcollection_1

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-6423.

Resolution: Fixed

Marking as resolved since I haven't seen this recently, and the comment says it 
hasn't been seen either.

> HdfsCollectionsAPIDistributedZkTest test fail: Could not find new collection 
> awholynewcollection_1
> --
>
> Key: SOLR-6423
> URL: https://issues.apache.org/jira/browse/SOLR-6423
> Project: Solr
>  Issue Type: Test
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> {noformat}
> java.lang.AssertionError: Could not find new collection awholynewcollection_1
>   at 
> __randomizedtesting.SeedInfo.seed([655D020D02309D33:E4BB8C15756FFD0F]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkForCollection(AbstractFullDistribZkTestBase.java:1642)
>   at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:723)
>   at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:203)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12040) HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every jenkins run

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-12040:

Component/s: hdfs
 Hadoop Integration

> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every 
> jenkins run
> 
>
> Key: SOLR-12040
> URL: https://issues.apache.org/jira/browse/SOLR-12040
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Hoss Man
>Assignee: Mark Miller
>Priority: Major
>
> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 are thin subclasses of 
> BasicDistributedZkTest & BasicDistributedZk2 that just tweak the setup to use 
> HDFS, and only run @Nightly.
> These tests are failing virtually every time they are run by jenkins - either 
> at a method level, or at a suite level (due to thread leaks, timeouts, etc...) 
> yet their non-HDFS superclasses virtually never fail.
> Per the jenkins failure rate reports I've set up, here are the failure rates of 
> all tests matching "BasicDistributed" for the past 7 days (note that the 
> non-HDFS tests aren't even listed, because they haven't failed at all even 
> though they are non-nightly and have cumulatively run ~750 times in the past 
> 7 days)
> http://fucit.org/solr-jenkins-reports/failure-report.html
> {noformat}
> "Suite?","Class","Method","Rate","Runs","Fails"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","","53.3","15","8"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","test","18.75","16","3"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","","46.1538461538462","13","6"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","test","7.69230769230769","13","1"
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12040) HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every jenkins run

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-12040:

Component/s: Tests

> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every 
> jenkins run
> 
>
> Key: SOLR-12040
> URL: https://issues.apache.org/jira/browse/SOLR-12040
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Hoss Man
>Assignee: Mark Miller
>Priority: Major
>
> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 are thin subclasses of 
> BasicDistributedZkTest & BasicDistributedZk2 that just tweak the setup to use 
> HDFS, and only run @Nightly.
> These tests are failing virtually every time they are run by jenkins - either 
> at a method level, or at a suite level (due to thread leaks, timeouts, etc...) 
> yet their non-HDFS superclasses virtually never fail.
> Per the jenkins failure rate reports I've set up, here are the failure rates of 
> all tests matching "BasicDistributed" for the past 7 days (note that the 
> non-HDFS tests aren't even listed, because they haven't failed at all even 
> though they are non-nightly and have cumulatively run ~750 times in the past 
> 7 days)
> http://fucit.org/solr-jenkins-reports/failure-report.html
> {noformat}
> "Suite?","Class","Method","Rate","Runs","Fails"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","","53.3","15","8"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","test","18.75","16","3"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","","46.1538461538462","13","6"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","test","7.69230769230769","13","1"
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12330) Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) either reported too little and even might be ignored

2019-02-02 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12330:

Description: 
Just encountered such weird behaviour; will recheck and follow up. 
 {{"filter":["\\{!v=$bogus}"]}} responds back with just an NPE, which makes it 
impossible to guess the reason.
-It might be even worse, since- {{"filter":[\\{"param":"bogus"}]}} seems to be 
just silently ignored. Turns out it's OK, see SOLR-9682.

  was:
Just encounter such weird behaviour, will recheck and followup. 
{{"filter":["\{!v=$bogus}"]}} responds back with just NPE which makes 
impossible to guess the reason.
It might be even worse, since {{"filter":[\{"param":"bogus"}]}} seems like just 
silently ignored.
Once agin, I'll double check. 


> Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) 
> either reported too little and even might be ignored 
> ---
>
> Key: SOLR-12330
> URL: https://issues.apache.org/jira/browse/SOLR-12330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.3
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch
>
>
> Just encountered such weird behaviour; will recheck and follow up. 
>  {{"filter":["\\{!v=$bogus}"]}} responds back with just an NPE, which makes it 
> impossible to guess the reason.
> -It might be even worse, since- {{"filter":[\\{"param":"bogus"}]}} seems to be 
> just silently ignored. Turns out it's OK, see SOLR-9682.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10308) Solr fails to work with Guava 21.0

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759179#comment-16759179
 ] 

Kevin Risden commented on SOLR-10308:
-

SOLR-9515 was merged into master and should be in Solr 8.0.

We should probably focus on SOLR-11260, though, since that is for a newer 
version of Guava.

> Solr fails to work with Guava 21.0
> --
>
> Key: SOLR-10308
> URL: https://issues.apache.org/jira/browse/SOLR-10308
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Affects Versions: 6.4.2
>Reporter: Vincent Massol
>Priority: Major
> Attachments: SOLR-10308.patch
>
>
> This is what we get:
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Objects.firstNonNull(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;
>   at 
> org.apache.solr.handler.component.HighlightComponent.prepare(HighlightComponent.java:118)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2299)
>   at 
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:178)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>   at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
>   at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
>   at 
> org.xwiki.search.solr.internal.AbstractSolrInstance.query(AbstractSolrInstance.java:117)
>   at 
> org.xwiki.query.solr.internal.SolrQueryExecutor.execute(SolrQueryExecutor.java:122)
>   at 
> org.xwiki.query.internal.DefaultQueryExecutorManager.execute(DefaultQueryExecutorManager.java:72)
>   at 
> org.xwiki.query.internal.SecureQueryExecutorManager.execute(SecureQueryExecutorManager.java:67)
>   at org.xwiki.query.internal.DefaultQuery.execute(DefaultQuery.java:287)
>   at org.xwiki.query.internal.ScriptQuery.execute(ScriptQuery.java:237)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.doInvoke(UberspectImpl.java:395)
>   at 
> org.apache.velocity.util.introspection.UberspectImpl$VelMethodImpl.invoke(UberspectImpl.java:384)
>   at 
> org.apache.velocity.runtime.parser.node.ASTMethod.execute(ASTMethod.java:173)
>   ... 183 more
> {noformat}
> Guava 21 has removed some signatures that solr is currently using.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk s

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-13060:
---

Assignee: Kevin Risden

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Steve Rowe
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-13060.patch, 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13074) MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like crazy

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759178#comment-16759178
 ] 

Kevin Risden commented on SOLR-13074:
-

Based on a bunch of comments in MoveReplicaTest, I would bet there are more 
problems with that base class than with the HDFS integration itself. 
MoveReplicaHDFSTest just happens to pick up all those bugs, plus it needs to 
deal with HDFS. There also seems to be HDFS-specific handling in MoveReplica, 
which to me doesn't make any sense.

> MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like 
> crazy
> -
>
> Key: SOLR-13074
> URL: https://issues.apache.org/jira/browse/SOLR-13074
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Reporter: Dawid Weiss
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-13074.patch
>
>
> This reproduces for me, always (Linux box):
> {code}
> ant test  -Dtestcase=MoveReplicaHDFSTest -Dtests.seed=DC1CE772C445A55D 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=fr 
> -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's the bug in Hadoop I discussed in SOLR-13060 -- one of the threads falls 
> into an endless loop when terminated (interrupted). Perhaps we should be 
> closing something cleanly and aren't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13074) MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like crazy

2019-02-02 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759176#comment-16759176
 ] 

Dawid Weiss commented on SOLR-13074:


Go ahead and assign this to yourself, Kevin. I don't know much about this 
stuff, really.

> MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like 
> crazy
> -
>
> Key: SOLR-13074
> URL: https://issues.apache.org/jira/browse/SOLR-13074
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Reporter: Dawid Weiss
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-13074.patch
>
>
> This reproduces for me, always (Linux box):
> {code}
> ant test  -Dtestcase=MoveReplicaHDFSTest -Dtests.seed=DC1CE772C445A55D 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=fr 
> -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's the bug in Hadoop I discussed in SOLR-13060 -- one of the threads falls 
> into an endless loop when terminated (interrupted). Perhaps we should be 
> closing something cleanly and aren't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12330) Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) either reported too little and even might be ignored

2019-02-02 Thread Cesar Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759168#comment-16759168
 ] 

Cesar Rodriguez commented on SOLR-12330:


[~mkhludnev], in regard to the following:

{quote}I suppose we can't hunt for those NPE rows one by one, but rather wrap 
FacetModule invocation with catch(Exception e) {throw new 
SolrException(...,e);}{quote}

I believe you are already aware, but in case it helps, among the 28 issues we 
have already submitted this week, 6 of them seem to be related to the facet 
module:

https://issues.apache.org/jira/browse/SOLR-13206?jql=labels%20%3D%20diffblue%20AND%20text%20~%20%22facet%22

And 3 of them are NPEs. Each of those tickets contains a stack trace.

Would it be useful if we ran a more detailed analysis of this module, aiming at 
finding more NPEs or other issues? Cf. my mail to the developers' mailing list 
last Monday.
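
(For illustration only, here is a minimal, self-contained sketch of the wrapping 
pattern quoted above. {{processFacets}} and the class name are hypothetical 
stand-ins, not the real FacetModule entry point.)

{code:java}
import org.apache.solr.common.SolrException;
import org.apache.solr.common.SolrException.ErrorCode;

public class FacetErrorWrapSketch {

  // Hypothetical stand-in for the real facet processing call site.
  static void processFacets(String jsonFacet) {
    throw new NullPointerException(); // simulates e.g. a bogus param reference
  }

  public static void main(String[] args) {
    try {
      processFacets("{\"filter\":[\"{!v=$bogus}\"]}");
    } catch (SolrException e) {
      throw e; // already carries an HTTP code and a message
    } catch (Exception e) {
      // Wrap anything else so the client gets an explanatory error
      // instead of a bare NPE surfacing as an HTTP 500.
      throw new SolrException(ErrorCode.BAD_REQUEST,
          "Error while processing JSON facet request: " + e, e);
    }
  }
}
{code}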

> Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) 
> either reported too little and even might be ignored 
> ---
>
> Key: SOLR-12330
> URL: https://issues.apache.org/jira/browse/SOLR-12330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.3
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch
>
>
> Just encountered such weird behaviour; will recheck and follow up. 
> {{"filter":["\{!v=$bogus}"]}} responds back with just an NPE, which makes it 
> impossible to guess the reason.
> It might be even worse, since {{"filter":[\{"param":"bogus"}]}} seems to be 
> just silently ignored.
> Once again, I'll double-check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract + delegate seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-02-02 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759175#comment-16759175
 ] 

David Smiley commented on LUCENE-8662:
--

This issue is highly related to, if not the same issue as, LUCENE-8292, which 
[~bruno.roustant] and I faced last year.  The troubles are not limited to 
seekExact but extend to the termState() variant as well... see the last few 
comments:  
https://issues.apache.org/jira/browse/LUCENE-8292?focusedCommentId=16475579&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16475579

Ugh!

> Change TermsEnum.seekExact(BytesRef) to abstract + delegate 
> seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum
> ---
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more than 
> 10 GB) during recovery or commit for a small index (3.5 GB).
>  The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using the Lucene API.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new   
> ExitableDirectoryReader(DirectoryReader.open(index),
> new QueryTimeoutImpl(1000 * 60 * 5))) {
> String id = "the-id";
> BytesRef text = new BytesRef(id);
> for (LeafReaderContext lf : reader.leaves()) {
>   TermsEnum te = lf.reader().terms("id").iterator();
>   System.out.println(te.seekExact(text));
> }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found out the root cause:
> we didn't implement the seekExact(BytesRef) method in 
> FilterLeafReader.FilterTermsEnum, so it uses the base class 
> TermsEnum.seekExact(BytesRef) implementation, which is very inefficient in 
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple: just override the seekExact(BytesRef) method in 
> FilterLeafReader.FilterTermsEnum:
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Reporting multiple issues triggering an HTTP 500 in Solr

2019-02-02 Thread César Rodríguez
Among the 77 exceptions we described in the blog post [1] we have now
created tickets for:

- 24 different null pointer exceptions
- 4 cast exceptions (one has been fixed in the meantime)
- 3 array index out of bounds
- 2 string index out of bounds

It turned out that some of them could be merged into the same report
(e.g. SOLR-13202), so this amounted to 28 different bug reports (3 of
them in Lucene). Following Jan Høydahl's suggestion (thanks!) we have
labelled all of them with 'diffblue' and 'newdev', so all issues can
be found here:

https://issues.apache.org/jira/issues/?jql=labels+%3D+diffblue

Now, as described in the blog post [1] we still have a bunch of HTTP
requests that produce an HTTP 500 for:

- 19 NumberFormatException
- 9 SolrException
- 4 IllegalArgumentException
- 4 IOException
- 3 IllegalStateException
- 2 UnsupportedOperationException
- 1 RuntimeException
- 1 org.noggit.JSONParser.ParserException

I agree with Jan that they correspond to expected and wanted errors,
but should they produce a 400 instead of a 500? In my view a service
should never return an HTTP 500, even if you throw invalid or unusual
requests at it. What do people think?

At any rate, does the community want the URLs that trigger those
exceptions? If the fix is going to be catching all of them at the
highest level in the servlet, then they are probably not necessary.
But if you want to fix (some of) them case by case, and provide
explanatory error responses for missing parameters, invalid data, etc.,
then these are useful, as you can use them to start a debugging
session to understand where best to catch the exception.
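
(As a concrete illustration of the case-by-case option -- a minimal sketch, not
Solr's actual handler code; the class and the parameter handling here are made
up -- a numeric parameter could be validated like this, so the client sees a 400
with an explanation instead of a 500:)

{code:java}
import org.apache.solr.common.SolrException;
import org.apache.solr.common.SolrException.ErrorCode;

public class ParamValidationSketch {

  // Parse a numeric request parameter, converting a NumberFormatException
  // into a 400 with an explanatory message rather than letting it bubble
  // up as an HTTP 500.
  static int parseIntParam(String name, String raw) {
    try {
      return Integer.parseInt(raw);
    } catch (NumberFormatException e) {
      throw new SolrException(ErrorCode.BAD_REQUEST,
          "Invalid value for parameter '" + name + "': '" + raw
              + "' (expected an integer)", e);
    }
  }

  public static void main(String[] args) {
    System.out.println(parseIntParam("rows", "10")); // prints 10
    parseIntParam("rows", "ten");                    // throws a 400 SolrException
  }
}
{code}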

Also, please note that the 'Environment' field of all the reports we filed
contains instructions on how to rebuild the films collection so that the
reported problems are easy to debug.

[1] 
https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br




On Thu, Jan 31, 2019 at 12:37 PM Jan Høydahl  wrote:
>
> Thanks for the reports, it will be good to get rid of NPEs even if the input 
> values are not likely to happen IRL.
> Reading your blog post, at the bottom you have a table with various kinds of 
> exceptions triggering 500 responses.
> Out of those, most are expected and wanted errors in my opinion, such as 
> NumberFormatException or IllegalArgumentException;
> they clearly tell the client that their input is wrong. Also, SolrException 
> is in most cases thrown to explain what went wrong.
>
> But NPE, ClassCastException, AIOOB and SIOOB exceptions are good 
> candidates for better input validation and fixing.
>
> BTW, here is a JIRA query that will list all your reported issues, in case 
> someone wants to fix several of them in a single commit:
> https://issues.apache.org/jira/browse/SOLR-13180?jql=project%20%3D%20SOLR%20AND%20text%20~%20diffblue
>
> You may also consider adding a label=diffblue to display all issues 
> with a click, as well as label=newdev to signal that these are excellent 
> tasks to be done by new Solr/Lucene developers.
>
> Instead of attaching large home.zip archives to the issues, it would be much 
> more helpful to include a few simple reproduction steps as a {code} block in 
> the JIRA, e.g.
>
> bin/solr start -c
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> bin/post -c films example/films/films.json
> curl "http://localhost:8983/solr/films/select?json=0";
>
> Note that the last line there reproduces the exception.
> This allows any developer to simply copy/paste those five lines to reproduce 
> :)
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 28. jan. 2019 kl. 17:06 skrev César Rodríguez :
>
> Thanks, we will do that.
>
> Just to be clear, we are talking about opening from 50 to 70 jira
> tickets. We found 77 unique points in the source code where an
> exception is thrown that causes an HTTP 500, but I'm guessing that
> some of them will not be serious enough to be reported.
>
> We can provide patches for the two issues described below. We will do
> our best to describe the probable cause of the error on each
> individual report, but we won't be able to provide patches for most of
> them.
>
> More information about this testing effort can be found here:
> https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results
>
> On Mon, Jan 28, 2019 at 3:03 PM Mikhail Khludnev  wrote:
>
>
> Yes. Please create jiras and attach patches. Tests are highly appreciated.
>
>
> On Mon, Jan 28, 2019 at 4:49 PM César Rodríguez 
>  wrote:
>
>
> Hi there,
>
> We analyzed the source code of Apache Solr and found a number of
> issues that we would like to report. We configured Solr using the
> films collection from the quick start tutorial

[jira] [Commented] (SOLR-9682) Ability to specify a query with a parameter name (in facet filter)

2019-02-02 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759171#comment-16759171
 ] 

Yonik Seeley commented on SOLR-9682:


> What if someone makes a typo when attempting to filter out some explicit 
> content?

If someone adds a filter and it doesn't work, the filter (and how it's 
specified via a param) will be the first thing they look at (hence a typo should 
be easy to debug).  Removing a feature in order to allow detection of one very 
specific typo doesn't seem like a good trade-off in this specific scenario.

It's a common scenario to want to apply a filter only if one is provided.  It 
makes it easier to have a request that doesn't have to be modified as much based 
on the absence/presence of other parameters.

Also, "Multi-valued parameters should be supported." was part of the objective. 
So the parameter refers to a list of filters... and allowing "0 or more" for a 
list is more flexible than "you're not allowed to have a zero-length list".
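
(To make that concrete, here is an illustrative SolrJ sketch -- not
authoritative; the collection, field, and parameter names are made up, and the
exact JSON facet syntax may need adjusting -- where the facet domain filter
references a parameter that the caller may or may not supply:)

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ParamFilterFacetSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery query = new SolrQuery("*:*");
      // The facet domain filter refers to the request parameter "userFilter".
      // Supplying zero values for it simply yields an empty filter list.
      query.add("json.facet",
          "{ cats: { type: terms, field: cat,"
              + " domain: { filter: [ { param: userFilter } ] } } }");
      // Add the parameter only when a filter is actually wanted.
      query.add("userFilter", "inStock:true");
      QueryResponse rsp = client.query("techproducts", query);
      System.out.println(rsp.getResponse().get("facets"));
    }
  }
}
{code}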


> Ability to specify a query with a parameter name (in facet filter)
> --
>
> Key: SOLR-9682
> URL: https://issues.apache.org/jira/browse/SOLR-9682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 6.4, 7.0
>
> Attachments: SOLR-9682.patch
>
>
> Currently, "filter" only supports query strings (examples at 
> http://yonik.com/solr-json-request-api/ )
> It would be nice to be able to reference a param that would be parsed as a 
> lucene/solr query.  Multi-valued parameters should be supported.
> We should keep in mind (and leave room for) a future "JSON Query Syntax" and 
> chose labels appropriately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759162#comment-16759162
 ] 

Kevin Risden commented on SOLR-13060:
-

Attached a patch that tries to address the issues laid out here. Basically the 
HDFS* versions now just set up HDFS with the right configset.

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: SOLR-13060.patch, 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13074) MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like crazy

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-13074:
---

  Assignee: Kevin Risden  (was: Dawid Weiss)
Attachment: SOLR-13074.patch

Patch that tries to improve MoveReplicaHDFSTest

> MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like 
> crazy
> -
>
> Key: SOLR-13074
> URL: https://issues.apache.org/jira/browse/SOLR-13074
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Reporter: Dawid Weiss
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-13074.patch
>
>
> This reproduces for me, always (Linux box):
> {code}
> ant test  -Dtestcase=MoveReplicaHDFSTest -Dtests.seed=DC1CE772C445A55D 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=fr 
> -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's the bug in Hadoop I discussed in SOLR-13060 -- one of the threads falls 
> into an endless loop when terminated (interrupted). Perhaps we should be 
> closing something cleanly and aren't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk sp

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13060:

Attachment: SOLR-13060.patch

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: SOLR-13060.patch, 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12291) Async prematurely reports completed state that causes severe shard loss

2019-02-02 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12291:

Summary: Async prematurely reports completed state that causes severe shard 
loss  (was: OverseerCollectionMessageHandler sliceCmd assumes only one replica 
exists on each node)

> Async prematurely reports completed state that causes severe shard loss
> ---
>
> Key: SOLR-12291
> URL: https://issues.apache.org/jira/browse/SOLR-12291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12291.patch, SOLR-12291.patch, SOLR-122911.patch
>
>
> The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists 
> on one node
> When multiple replicas of a slice are on the same node we only track one 
> replica's async request. This happens because the async requestMap's key is 
> "node_name"
> I discovered this when [~alabax] shared some logs of a restore issue, where 
> the second replica got added before the first replica had completed its 
> restorecore action.
> While looking at the logs I noticed that the overseer never called 
> REQUESTSTATUS for the restorecore action, almost as if it had missed 
> tracking that particular async request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-02-02 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759155#comment-16759155
 ] 

Mark Miller commented on SOLR-13189:


To try to be extra clear:

My patch is intended to prove to you that my theory is correct and that 
following the system rules allows this test to pass with failures injected.

By coincidence, my patch does something we need to start doing - change our 
old-style clustered verification test methods to work with new-style tests to 
reduce duplication, and move old-style tests to the new-style tests.

We should inject random failures, but only in specific tests that check things 
the way my patch does.

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch, SOLR-13189.patch, SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the client's perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9682) Ability to specify a query with a parameter name (in facet filter)

2019-02-02 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759145#comment-16759145
 ] 

Mikhail Khludnev edited comment on SOLR-9682 at 2/2/19 7:31 PM:


[~ysee...@gmail.com] Would you comment on the [filter 
missing_param|https://github.com/apache/lucene-solr/blame/17563ce81f26cb2844c60ba26b1627f6f7e8908d/solr/core/src/test/org/apache/solr/search/facet/TestJsonFacets.java#L1535]?
 Why is it silently ignored? My take on that is: referring to an absent tag is OK, 
but referring to an absent parameter is an error, which should be explicitly 
thrown. What if someone makes a typo when attempting to filter out some explicit 
content? One should not ignore that.

the topic is discussed by [~munendrasn] at SOLR-12330

Thanks.


was (Author: mkhludnev):
Would you comment on the [filter 
missing_param|https://github.com/apache/lucene-solr/blame/17563ce81f26cb2844c60ba26b1627f6f7e8908d/solr/core/src/test/org/apache/solr/search/facet/TestJsonFacets.java#L1535]?
 Why is it silently ignored? My take on that is: referring to an absent tag is OK, 
but referring to an absent parameter is an error, which should be explicitly 
thrown. What if someone makes a typo when attempting to filter out some explicit 
content? One should not ignore that.   

the topic is 
[discussed|https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759135#comment-16759135]
 by [~munendrasn] at SOLR-12330  

Thanks. 

> Ability to specify a query with a parameter name (in facet filter)
> --
>
> Key: SOLR-9682
> URL: https://issues.apache.org/jira/browse/SOLR-9682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 6.4, 7.0
>
> Attachments: SOLR-9682.patch
>
>
> Currently, "filter" only supports query strings (examples at 
> http://yonik.com/solr-json-request-api/ )
> It would be nice to be able to reference a param that would be parsed as a 
> lucene/solr query.  Multi-valued parameters should be supported.
> We should keep in mind (and leave room for) a future "JSON Query Syntax" and 
> chose labels appropriately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12291) Async prematurely reports completed state that causes severe shard loss

2019-02-02 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-12291:
---

Assignee: Mikhail Khludnev

> Async prematurely reports completed state that causes severe shard loss
> ---
>
> Key: SOLR-12291
> URL: https://issues.apache.org/jira/browse/SOLR-12291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Reporter: Varun Thacker
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12291.patch, SOLR-12291.patch, SOLR-122911.patch
>
>
> The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists 
> on one node
> When multiple replicas of a slice are on the same node we only track one 
> replica's async request. This happens because the async requestMap's key is 
> "node_name"
> I discovered this when [~alabax] shared some logs of a restore issue, where 
> the second replica got added before the first replica had completed its 
> restorecore action.
> While looking at the logs I noticed that the overseer never called 
> REQUESTSTATUS for the restorecore action, almost as if it had missed 
> tracking that particular async request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759149#comment-16759149
 ] 

Kevin Risden commented on SOLR-13060:
-

So I looked at this, and the way the HDFS versions of the tests are written 
doesn't really make any sense. It looks like I now have at least the 
HdfsCollectionsAPIDistributedZkTest test passing (it turns out it had some parts 
of MoveReplica in it???). HdfsAutoAddReplicasIntegrationTest is still a bit 
flaky, but it does pass sometimes (might be a timing issue?). 

I'll look at MoveReplicaHDFSTest in SOLR-13074.

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12080) Frequent failures of MoveReplicaHDFSTest.testFailedMove

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-12080:

Component/s: Tests
 Hadoop Integration

> Frequent failures of MoveReplicaHDFSTest.testFailedMove
> ---
>
> Key: SOLR-12080
> URL: https://issues.apache.org/jira/browse/SOLR-12080
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: jenkins.log.txt.gz
>
>
> This test frequently fails. This is one of the failing seeds:
> {code}
>[junit4]   2> 129275 INFO  (qtp1647120030-248) [n:127.0.0.1:55469_solr 
> c:MoveReplicaHDFSTest_failed_coll_true s:shard2 r:core_node7 
> x:MoveReplicaHDFSTest_failed_coll_true_shard2_replica_n4] o.a.s.c.S.Request 
> [MoveReplicaHDFSTest_failed_coll_true_shard2_replica_n4]  webapp=/solr 
> path=/select 
> params={q=*:*&_stateVer_=MoveReplicaHDFSTest_failed_coll_true:9&wt=javabin&version=2}
>  status=503 QTime=0
>[junit4]   2> 129278 ERROR (qtp148844424-682) [n:127.0.0.1:54855_solr 
> c:MoveReplicaHDFSTest_failed_coll_true s:shard2 r:core_node8 
> x:MoveReplicaHDFSTest_failed_coll_true_shard2_replica_n6] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: no servers 
> hosting shard: shard1
>[junit4]   2>  at 
> org.apache.solr.handler.component.HttpShardHandler.prepDistributed(HttpShardHandler.java:436)
>[junit4]   2>  at 
> org.apache.solr.handler.component.SearchHandler.getAndPrepShardHandler(SearchHandler.java:226)
>[junit4]   2>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:264)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>[junit4]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:527)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:530)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
>[junit4]   2>  at 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:1

[jira] [Commented] (SOLR-9682) Ability to specify a query with a parameter name (in facet filter)

2019-02-02 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759145#comment-16759145
 ] 

Mikhail Khludnev commented on SOLR-9682:


Would you comment on the [filter 
missing_param|https://github.com/apache/lucene-solr/blame/17563ce81f26cb2844c60ba26b1627f6f7e8908d/solr/core/src/test/org/apache/solr/search/facet/TestJsonFacets.java#L1535]?
 Why is it silently ignored? My take on that is: referring to an absent tag is OK, 
but referring to an absent parameter is an error, which should be explicitly 
thrown. What if someone makes a typo when attempting to filter out some explicit 
content? One should not ignore that.   

the topic is 
[discussed|https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759135#comment-16759135]
 by [~munendrasn] at SOLR-12330  

Thanks. 

> Ability to specify a query with a parameter name (in facet filter)
> --
>
> Key: SOLR-9682
> URL: https://issues.apache.org/jira/browse/SOLR-9682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Fix For: 6.4, 7.0
>
> Attachments: SOLR-9682.patch
>
>
> Currently, "filter" only supports query strings (examples at 
> http://yonik.com/solr-json-request-api/ )
> It would be nice to be able to reference a param that would be parsed as a 
> lucene/solr query.  Multi-valued parameters should be supported.
> We should keep in mind (and leave room for) a future "JSON Query Syntax" and 
> chose labels appropriately.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12330) Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) either reported too little and even might be ignored

2019-02-02 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759135#comment-16759135
 ] 

Munendra S N commented on SOLR-12330:
-

[~mkhludnev]

For the 3rd case (*filter:[\{param:f2\}]*), there are two test cases where it is 
expected to pass even when f2 is not specified. I'm not sure whether this is 
intended test behavior or a side effect. Let me know if this needs to be changed 
or kept as it is.
{code:java}
https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/search/facet/TestJsonFacets.java#L3149
{code}


> Referencing non existing parameter in JSON Facet "filter" (and/or other NPEs) 
> either reported too little and even might be ignored 
> ---
>
> Key: SOLR-12330
> URL: https://issues.apache.org/jira/browse/SOLR-12330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.3
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12330.patch, SOLR-12330.patch, SOLR-12330.patch
>
>
> Just encountered such weird behaviour; will recheck and follow up. 
> {{"filter":["\{!v=$bogus}"]}} responds back with just an NPE, which makes it 
> impossible to guess the reason.
> It might be even worse, since {{"filter":[\{"param":"bogus"}]}} seems to be 
> just silently ignored.
> Once again, I'll double-check. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-master #2480: POMs out of sync

2019-02-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2480/

No tests ran.

Build Log:
[...truncated 32702 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:679: The 
following error occurred while executing this line:
: Java returned: 1

Total time: 17 minutes 56 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-master - Build # 3165 - Still Unstable

2019-02-02 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3165/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClientTest.testCollectionParameters

Error Message:
Captured an uncaught exception in thread: Thread[id=2013, 
name=h2sc-1283-thread-13, state=RUNNABLE, 
group=TGRP-ConcurrentUpdateHttp2SolrClientTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2013, name=h2sc-1283-thread-13, state=RUNNABLE, 
group=TGRP-ConcurrentUpdateHttp2SolrClientTest]
at 
__randomizedtesting.SeedInfo.seed([E9B29D7F21FC8E55:DC96B81A501BD757]:0)
Caused by: java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@7ae8ad1 
rejected from 
java.util.concurrent.ScheduledThreadPoolExecutor@9697454[Terminated, pool size 
= 0, active threads = 0, queued tasks = 0, completed tasks = 0]
at __randomizedtesting.SeedInfo.seed([E9B29D7F21FC8E55]:0)
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at 
java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:326)
at 
java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533)
at 
org.eclipse.jetty.util.thread.ScheduledExecutorScheduler.schedule(ScheduledExecutorScheduler.java:102)
at 
org.eclipse.jetty.util.SocketAddressResolver$Async.lambda$resolve$1(SocketAddressResolver.java:154)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16361 lines...]
   [junit4] Suite: 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClientTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-solrj/test/J2/temp/solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClientTest_E9B29D7F21FC8E55-001/init-core-data-001
   [junit4]   2> 151947 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 151949 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 151950 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 151989 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 151990 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.a.s.SolrTestCaseJ4 initCore end
   [junit4]   2> 151990 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-solrj/test/J2/temp/solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClientTest_E9B29D7F21FC8E55-001/tempDir-002/cores/core
   [junit4]   2> 151992 WARN  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
   [junit4]   2> 151992 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
   [junit4]   2> 151992 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
   [junit4]   2> 151993 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: 
c4550056e785fb5665914545889f21dc136ad9e6; jvm 1.8.0_191-b12
   [junit4]   2> 151994 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 151994 INFO  
(SUITE-ConcurrentUpdateHttp2SolrClientTest-seed#[E9B29D7F21FC8E55]-worker) [
] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 151994 INFO  
(SUITE-ConcurrentUpdateH

[jira] [Updated] (SOLR-6373) Unit tests for secure (kerberos-based) HDFS

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-6373:
---
Component/s: Tests

> Unit tests for secure (kerberos-based) HDFS
> ---
>
> Key: SOLR-6373
> URL: https://issues.apache.org/jira/browse/SOLR-6373
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, security, Tests
>Reporter: Gregory Chanan
>Priority: Major
>
> The HdfsDirectoryFactory has support for reading/writing secure HDFS, but we 
> currently have no unit tests that exercise this functionality.  It should be 
> possible to write them using Hadoop's MiniKDC, but I haven't investigated 
> that in depth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6373) Unit tests for secure (kerberos-based) HDFS

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-6373:
---
Component/s: security
 hdfs
 Hadoop Integration

> Unit tests for secure (kerberos-based) HDFS
> ---
>
> Key: SOLR-6373
> URL: https://issues.apache.org/jira/browse/SOLR-6373
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, security
>Reporter: Gregory Chanan
>Priority: Major
>
> The HdfsDirectoryFactory has support for reading/writing secure HDFS, but we 
> currently have no unit tests that exercise this functionality.  It should be 
> possible to write them using Hadoop's MiniKDC, but I haven't investigated 
> that in depth.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9889) Add multi-node Solrcloud unit tests for kerberos auth

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9889:
---
Component/s: Tests
 Hadoop Integration

> Add multi-node Solrcloud unit tests for kerberos auth
> -
>
> Key: SOLR-9889
> URL: https://issues.apache.org/jira/browse/SOLR-9889
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Hrishikesh Gadre
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11763) Upgrade Guava to 23.0

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759131#comment-16759131
 ] 

Kevin Risden edited comment on SOLR-11763 at 2/2/19 5:50 PM:
-

Might be worth looking into again since Hadoop was upgraded to 3.x in SOLR-9515. 
We might be able to upgrade Guava and not have to deal with the older version as 
long as the code paths we need aren't affected. HADOOP-10101, HADOOP-15272, 
and HADOOP-15960 might have some pointers there.


was (Author: risdenk):
Might be worth looking into again since Hadoop was upgraded to 3.x in SOLR-9515. 
We might be able to upgrade Guava and not have to deal with the older version as 
long as the code paths we need aren't affected. HADOOP-10101 and HADOOP-15272 
might have some pointers there.

> Upgrade Guava to 23.0
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch
>
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11763) Upgrade Guava to 23.0

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759131#comment-16759131
 ] 

Kevin Risden commented on SOLR-11763:
-

Might be worth looking into again since Hadoop was upgraded to 3.x in SOLR-9515. 
We might be able to upgrade Guava and not have to deal with the older version as 
long as the code paths we need aren't affected. HADOOP-10101 and HADOOP-15272 
might have some pointers there.

> Upgrade Guava to 23.0
> -
>
> Key: SOLR-11763
> URL: https://issues.apache.org/jira/browse/SOLR-11763
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11763.patch, SOLR-11763.patch, SOLR-11763.patch
>
>
> Our code is running into version conflicts with Solr's old Guava dependency. 
> This fixes it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759125#comment-16759125
 ] 

Kevin Risden commented on SOLR-13060:
-

[~dweiss] - Thanks. Yeah, I commented about MoveReplicaHDFSTest in SOLR-13074.

I see a bunch of "namenode low on available disk space" messages before 
failures for HdfsCollectionsAPIDistributedZkTest.
{code:java}
ant test  -Dtestcase=HdfsCollectionsAPIDistributedZkTest 
-Dtests.seed=B6F9375F80C2F760 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.awaitsfix=true -Dtests.badapples=true -Dtests.locale=en-SG 
-Dtests.timezone=America/Winnipeg -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{code}
The above fails for me repeatedly.

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7301) HdfsDirectoryFactory does not support maprfs

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-7301:
---
Component/s: Hadoop Integration

> HdfsDirectoryFactory does not support maprfs
> 
>
> Key: SOLR-7301
> URL: https://issues.apache.org/jira/browse/SOLR-7301
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Reporter: Shenghua Wan
>Priority: Minor
> Attachments: fix_support_maprfs.patch
>
>
> When the map-reduce index generator was run, an exception was thrown:
> 2015-03-24 11:06:04,569 WARN org.apache.hadoop.mapred.Child: Error running 
> child
> java.lang.IllegalStateException: Failed to initialize record writer for 
> MapReduceSolrIndex, attempt_201503171620_12558_r_00_0
>   at 
> org.apache.solr.hadoop.SolrRecordWriter.(SolrRecordWriter.java:127)
>   at 
> org.apache.solr.hadoop.SolrOutputFormat.getRecordWriter(SolrOutputFormat.java:164)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:605)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:456)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>   at org.apache.hadoop.mapred.Child.main(Child.java:264)
> Caused by: org.apache.solr.common.SolrException: Unable to create core [core1]
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:507)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
>   at 
> org.apache.solr.hadoop.SolrRecordWriter.createEmbeddedSolrServer(SolrRecordWriter.java:172)
>   at 
> org.apache.solr.hadoop.SolrRecordWriter.(SolrRecordWriter.java:120)
>   ... 8 more
> Caused by: org.apache.solr.common.SolrException: You must set the 
> HdfsDirectoryFactory param solr.hdfs.home for relative dataDir paths to work
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.getDataHome(HdfsDirectoryFactory.java:271)
>   at org.apache.solr.core.SolrCore.<init>(SolrCore.java:699)
>   at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
>   ... 11 more
> After investigation, I found that the class HdfsDirectoryFactory hardcodes 
> "hdfs:/".
> A patch is provided in the attachment.
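
The attached patch isn't inlined above, but to illustrate the distinction the report draws, here is a minimal, hypothetical sketch (not the actual HdfsDirectoryFactory code): a hard-coded "hdfs:/" prefix check rejects other Hadoop-compatible filesystems such as maprfs://, while resolving the scheme through Hadoop's own Path/FileSystem API accepts whatever the URI declares.
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SchemeCheckSketch {

  // Brittle: only URIs that literally start with "hdfs:/" pass this check,
  // so maprfs://... (or any other Hadoop-compatible scheme) is rejected.
  static boolean isHdfsOnly(String dataDir) {
    return dataDir != null && dataDir.startsWith("hdfs:/");
  }

  // Scheme-agnostic: let Hadoop resolve whatever filesystem the URI names.
  static FileSystem resolve(String dataDir, Configuration conf) throws IOException {
    return new Path(dataDir).getFileSystem(conf);
  }
}
{code}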



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11381) HdfsDirectoryFactory throws NPE on cleanup because file system has been closed

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11381:

Component/s: Hadoop Integration

> HdfsDirectoryFactory throws NPE on cleanup because file system has been closed
> --
>
> Key: SOLR-11381
> URL: https://issues.apache.org/jira/browse/SOLR-11381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 7.6, 8.0
>
>
> I saw this happening on tests related to autoscaling. The old-directory 
> cleanup is triggered on core close in a separate thread. This can cause a race 
> condition where the filesystem is closed before the cleanup starts running. 
> Then an NPE is thrown and the cleanup fails.
> Fixing the NPE is simple but I think this is a real bug where old directories 
> can be left around on HDFS. I don't know enough about HDFS to investigate 
> further. Leaving it here for interested people to pitch in.
> {code}
> 105029 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Error 
> checking for old index directories to clean-up.
> java.io.IOException: Filesystem closed
>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2083)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2069)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:791)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557)
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:540)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> 105030 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore Failed to cleanup 
> old index directories for core control_collection_shard1_replica_n1
> java.lang.NullPointerException
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:558)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
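
The trace above shows the two failure points: the IOException ("Filesystem closed") during listing, and the later NPE in cleanupOldIndexDirectories. A minimal sketch of the kind of defensive guard that would avoid the NPE follows (hypothetical names, not the actual HdfsDirectoryFactory code); note it only silences the symptom, and the underlying race that can leave old directories behind still needs a real fix.
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class CleanupSketch {
  // Hypothetical stand-in for the cleanup step that currently NPEs.
  void cleanupOldIndexDirectories(FileSystem fs, Path dataDirPath) {
    if (fs == null || dataDirPath == null) {
      // Core close already released the filesystem; nothing left to clean.
      return;
    }
    try {
      FileStatus[] entries = fs.listStatus(dataDirPath);
      // ... pick the stale index directories out of 'entries' and delete them ...
    } catch (IOException e) {
      // "Filesystem closed" lands here; log and skip instead of failing later
      // with a NullPointerException.
    }
  }
}
{code}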



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759130#comment-16759130
 ] 

Kevin Risden commented on SOLR-13060:
-

Similar errors with HdfsAutoAddReplicasIntegrationTest.
{code:java}
ant test  -Dtestcase=HdfsAutoAddReplicasIntegrationTest 
-Dtests.seed=BD8B35758F984DD8 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.awaitsfix=true -Dtests.badapples=true -Dtests.locale=ar-QA 
-Dtests.timezone=Antarctica/Palmer -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{code}
The above also fails for me repeatedly.

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7321) Remove reflection in FSHDFSUtils.java

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759127#comment-16759127
 ] 

Kevin Risden commented on SOLR-7321:


This still looks valid on master. 

> Remove reflection in FSHDFSUtils.java
> -
>
> Key: SOLR-7321
> URL: https://issues.apache.org/jira/browse/SOLR-7321
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, SolrCloud
>Reporter: Mike Drob
>Priority: Major
> Attachments: SOLR-7321.patch
>
>
> When we copied FSHDFSUtils from HBase in SOLR-6969 we also carried over their 
> compatibility shims for both Hadoop 1 and Hadoop 2. Since we only support 
> Hadoop 2, we don't need to do reflection in this class and can just invoke 
> the methods directly.
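
The patch isn't inlined above, but the change described amounts to replacing a reflective lookup with a plain method call. A rough before/after sketch (assumed shape, not the actual FSHDFSUtils code) using DistributedFileSystem.recoverLease, which is available on Hadoop 2:
{code:java}
import java.io.IOException;
import java.lang.reflect.Method;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

class LeaseRecoverySketch {

  // Hadoop 1/2 shim style: look the method up at runtime via reflection.
  static boolean recoverLeaseReflectively(DistributedFileSystem dfs, Path p) throws Exception {
    Method m = DistributedFileSystem.class.getMethod("recoverLease", Path.class);
    return (Boolean) m.invoke(dfs, p);
  }

  // Hadoop 2 only: invoke the method directly.
  static boolean recoverLeaseDirectly(DistributedFileSystem dfs, Path p) throws IOException {
    return dfs.recoverLease(p);
  }
}
{code}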



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759117#comment-16759117
 ] 

Kevin Risden commented on SOLR-13060:
-

I'm curious whether these issues still exist with Hadoop 3, which was just merged 
to master/8x in SOLR-9515. I can take a look at the 3 annotated tests.

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk sp

2019-02-02 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13060:

Component/s: Hadoop Integration

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9459) Upgrade dependencies

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759124#comment-16759124
 ] 

Kevin Risden commented on SOLR-9459:


It might be worth looking at this again with Hadoop 3 in master/8.x/8.0. I 
commented on SOLR-9079 about commons-lang3. commons-collections still exists in 
Hadoop 3, but I didn't look much closer.

> Upgrade dependencies
> 
>
> Key: SOLR-9459
> URL: https://issues.apache.org/jira/browse/SOLR-9459
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Petar Tahchiev
>Priority: Major
> Attachments: commons-lang3.patch
>
>
> Hello,
> my project has more than 400 dependencies and I'm trying to ban the usage of 
> {{commons-collections}} and {{commons-lang}} in favor of 
> {{org.apache.commons:commons-collections4}} and 
> {{org.apache.commons:commons-lang3}}. Unfortunately, out of the 400 
> dependencies *only* Solr is still using the old {{collections}} and {{lang}} 
> dependencies, which are more than 6 years old.
> Is there a specific reason for that? Can you please update to the latest 
> versions:
> http://repo1.maven.org/maven2/org/apache/commons/commons-lang3/
> http://repo1.maven.org/maven2/org/apache/commons/commons-collections4/
> http://repo1.maven.org/maven2/org/apache/commons/commons-configuration2/
> http://repo1.maven.org/maven2/org/apache/commons/commons-io/
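
For readers unfamiliar with these libraries: the "upgrade" is really a package migration, since the 3.x/4.x artifacts live under new Maven coordinates and Java packages. A small illustrative example of what callers change (not Solr code, just a sketch of the two APIs):
{code:java}
import java.util.Collections;
import java.util.List;

// old: import org.apache.commons.collections.CollectionUtils;
// old: import org.apache.commons.lang.StringUtils;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.lang3.StringUtils;

class DependencyMigrationSketch {
  static boolean bothEmpty(List<String> values, String text) {
    // Same method names, new packages; behavior is unchanged for these calls.
    return CollectionUtils.isEmpty(values) && StringUtils.isBlank(text);
  }

  public static void main(String[] args) {
    System.out.println(bothEmpty(Collections.emptyList(), "  "));  // true
  }
}
{code}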



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13074) MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like crazy

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759121#comment-16759121
 ] 

Kevin Risden commented on SOLR-13074:
-

The 'reproduce with' line from the top fails for me:

 
{code:java}
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=MoveReplicaHDFSTest 
-Dtests.seed=DC1CE772C445A55D -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.awaitsfix=true -Dtests.badapples=true 
-Dtests.locale=fr -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
[junit4] ERROR 0.00s | MoveReplicaHDFSTest (suite) <<<
[junit4] > Throwable #1: java.lang.NullPointerException
[junit4] > at __randomizedtesting.SeedInfo.seed([DC1CE772C445A55D]:0)
[junit4] > at 
org.apache.solr.cloud.SolrCloudTestCase.zkClient(SolrCloudTestCase.java:227)
[junit4] > at 
org.apache.solr.cloud.MoveReplicaHDFSTest.setupClass(MoveReplicaHDFSTest.java:55)
[junit4] > at java.lang.Thread.run(Thread.java:748)
{code}
 

 

> MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like 
> crazy
> -
>
> Key: SOLR-13074
> URL: https://issues.apache.org/jira/browse/SOLR-13074
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>
> This reproduces for me, always (Linux box):
> {code}
> ant test  -Dtestcase=MoveReplicaHDFSTest -Dtests.seed=DC1CE772C445A55D 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=fr 
> -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's the bug in Hadoop I discussed in SOLR-13060 -- one of the threads falls 
> into an endless loop when terminated (interrupted). Perhaps we should be 
> closing something cleanly and aren't.
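
The Hadoop code in question isn't quoted here, but the failure mode described (a thread that keeps looping and logging after being interrupted) typically comes from the interrupt being swallowed inside the loop. A generic illustration of that antipattern, not the actual Hadoop code:
{code:java}
class InterruptLoopSketch implements Runnable {
  @Override
  public void run() {
    while (true) {                        // never consults the interrupt flag
      try {
        Thread.sleep(1000);               // stand-in for a retry/poll step
      } catch (InterruptedException e) {
        // Swallowing the exception clears the interrupt status, so the loop
        // keeps spinning (and logging) after the thread was asked to stop.
      }
    }
  }

  // A cooperative version restores the flag and exits:
  //   catch (InterruptedException e) {
  //     Thread.currentThread().interrupt();
  //     return;
  //   }
}
{code}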



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13060) Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job timeout, causing Jenkins to kill JVMs, causing dump files to be created that fill all disk

2019-02-02 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759120#comment-16759120
 ] 

Dawid Weiss commented on SOLR-13060:


MoveReplicaHDFSTest looks just plain broken to me at the infrastructural level 
(how things are started up / cleaned up).

> Some Nightly HDFS tests never terminate on ASF Jenkins, triggering whole-job 
> timeout, causing Jenkins to kill JVMs, causing dump files to be created that 
> fill all disk space, causing failure of all following jobs on the same node
> -
>
> Key: SOLR-13060
> URL: https://issues.apache.org/jira/browse/SOLR-13060
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: 
> junit4-J0-20181210_065854_4175881849742830327151.spill.part1.gz
>
>
> The 3 tests that are affected: 
> * HdfsAutoAddReplicasIntegrationTest
> * HdfsCollectionsAPIDistributedZkTest
> * MoveReplicaHDFSTest 
> Instances from the dev list:
> 12/1: 
> https://lists.apache.org/thread.html/e04ad0f9113e15f77393ccc26e3505e3090783b1d61bd1c7ff03d33e@%3Cdev.lucene.apache.org%3E
> 12/5: 
> https://lists.apache.org/thread.html/d78c99255abfb5134803c2b77664c1a039d741f92d6e6fcbcc66cd14@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/92ad03795ae60b1e94859d49c07740ca303f997ae2532e6f079acfb4@%3Cdev.lucene.apache.org%3E
> 12/8: 
> https://lists.apache.org/thread.html/26aace512bce0b51c4157e67ac3120f93a99905b40040bee26472097@%3Cdev.lucene.apache.org%3E
> 12/11: 
> https://lists.apache.org/thread.html/33558a8dd292fd966a7f476bf345b66905d99f7eb9779a4d17b7ec97@%3Cdev.lucene.apache.org%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9079) Upgrade commons-lang to version 3.x

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759122#comment-16759122
 ] 

Kevin Risden commented on SOLR-9079:


It looks like Hadoop 3 no longer relies on commons-lang, only on commons-lang3. 
I don't know if there is anything else in Solr that relies on commons-lang. It 
might be worth trying to remove it and seeing what, if anything, breaks.

> Upgrade commons-lang to version 3.x
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Priority: Minor
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional ZooKeeper dependency on 
> commons-lang 2.4.
> So upgrading commons-lang can't happen anytime soon, but this ticket is here 
> to track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e
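
As a small illustration of the lang3 Pair API the description points to (hypothetical example, not Solr code):
{code:java}
import org.apache.commons.lang3.tuple.Pair;

class PairSketch {
  public static void main(String[] args) {
    // An immutable pair from commons-lang3, the suggested alternative to
    // Solr's own org.apache.solr.common.util.Pair.
    Pair<String, Integer> shardAndCount = Pair.of("shard1", 42);
    System.out.println(shardAndCount.getLeft() + " -> " + shardAndCount.getRight());
  }
}
{code}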



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13074) MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like crazy

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759121#comment-16759121
 ] 

Kevin Risden edited comment on SOLR-13074 at 2/2/19 5:27 PM:
-

The 'reproduce with' line from the description fails for me:
{code:java}
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=MoveReplicaHDFSTest 
-Dtests.seed=DC1CE772C445A55D -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.awaitsfix=true -Dtests.badapples=true 
-Dtests.locale=fr -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
[junit4] ERROR 0.00s | MoveReplicaHDFSTest (suite) <<<
[junit4] > Throwable #1: java.lang.NullPointerException
[junit4] > at __randomizedtesting.SeedInfo.seed([DC1CE772C445A55D]:0)
[junit4] > at 
org.apache.solr.cloud.SolrCloudTestCase.zkClient(SolrCloudTestCase.java:227)
[junit4] > at 
org.apache.solr.cloud.MoveReplicaHDFSTest.setupClass(MoveReplicaHDFSTest.java:55)
[junit4] > at java.lang.Thread.run(Thread.java:748)
{code}


was (Author: risdenk):
The 'reproduce with' line from the top fails for me:

 
{code:java}
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=MoveReplicaHDFSTest 
-Dtests.seed=DC1CE772C445A55D -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.awaitsfix=true -Dtests.badapples=true 
-Dtests.locale=fr -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
[junit4] ERROR 0.00s | MoveReplicaHDFSTest (suite) <<<
[junit4] > Throwable #1: java.lang.NullPointerException
[junit4] > at __randomizedtesting.SeedInfo.seed([DC1CE772C445A55D]:0)
[junit4] > at 
org.apache.solr.cloud.SolrCloudTestCase.zkClient(SolrCloudTestCase.java:227)
[junit4] > at 
org.apache.solr.cloud.MoveReplicaHDFSTest.setupClass(MoveReplicaHDFSTest.java:55)
[junit4] > at java.lang.Thread.run(Thread.java:748)
{code}
 

 

> MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like 
> crazy
> -
>
> Key: SOLR-13074
> URL: https://issues.apache.org/jira/browse/SOLR-13074
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>
> This reproduces for me, always (Linux box):
> {code}
> ant test  -Dtestcase=MoveReplicaHDFSTest -Dtests.seed=DC1CE772C445A55D 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=fr 
> -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's the bug in Hadoop I discussed in SOLR-13060 -- one of the threads falls 
> into an endless loop when terminated (interrupted). Perhaps we should be 
> closing something cleanly and aren't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13074) MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like crazy

2019-02-02 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759119#comment-16759119
 ] 

Kevin Risden commented on SOLR-13074:
-

I'm curious whether these tests still fail with Hadoop 3 after SOLR-9515. I can 
take a look and see if there are failures.

> MoveReplicaHDFSTest leaks threads, falls into an endless loop, logging like 
> crazy
> -
>
> Key: SOLR-13074
> URL: https://issues.apache.org/jira/browse/SOLR-13074
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
>
> This reproduces for me, always (Linux box):
> {code}
> ant test  -Dtestcase=MoveReplicaHDFSTest -Dtests.seed=DC1CE772C445A55D 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=fr 
> -Dtests.timezone=Australia/Tasmania -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's the bug in Hadoop I discussed in SOLR-13060 -- one of the threads falls 
> into an endless loop when terminated (interrupted). Perhaps we should be 
> closing something cleanly and aren't.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


