[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_172) - Build # 7759 - Unstable!

2019-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7759/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudHttp2SolrClientTest.stateVersionParamTest

Error Message:
Error from server at http://127.0.0.1:55431/solr/collection1: no servers 
hosting shard: shard2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55431/solr/collection1: no servers hosting 
shard: shard2
at 
__randomizedtesting.SeedInfo.seed([A7E9A29BE54B96D5:2D0AFE34E513C5C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:987)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1002)
at 
org.apache.solr.client.solrj.impl.CloudHttp2SolrClientTest.stateVersionParamTest(CloudHttp2SolrClientTest.java:626)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-12732) TestLogWatcher failure on Jenkins

2019-03-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788510#comment-16788510
 ] 

ASF subversion and git services commented on SOLR-12732:


Commit 8c6e30536562413639eaf8bab1087da700733b33 in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8c6e305 ]

SOLR-12732: TestLogWatcher failure on Jenkins. Added more logging


> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact; I think I see the problem. It'll 
> take a lot of beasting to nail it down, though.
> Working hypothesis: when we test whether the new searcher has no messages, 
> the "no messages logged" check can run against the watcher before the new 
> one _really_ gets active.
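
A minimal sketch of the kind of guard that hypothesis suggests. The LogHandle interface and method names below are hypothetical stand-ins, not Solr's actual TestLogWatcher/LogWatcher API: log a sentinel message and poll until the new watcher has recorded it, so the "no messages yet" assertion cannot run before the watcher is really active.

{code:java}
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the watcher under test; not Solr's LogWatcher API.
interface LogHandle {
  boolean hasSeen(String marker);
}

final class WatcherTestUtil {
  /**
   * Emit a known sentinel message, then poll until the new watcher has
   * recorded it (or fail after timeoutMs). Only after this returns is it
   * safe to assert that no *other* messages were captured.
   */
  static void awaitActive(LogHandle watcher, Runnable logMarker, String marker,
                          long timeoutMs) throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    logMarker.run();
    while (!watcher.hasSeen(marker)) {
      if (System.nanoTime() > deadline) {
        throw new AssertionError("watcher never became active");
      }
      Thread.sleep(50);
    }
  }
}
{code}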






[jira] [Commented] (SOLR-12732) TestLogWatcher failure on Jenkins

2019-03-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788512#comment-16788512
 ] 

ASF subversion and git services commented on SOLR-12732:


Commit 83ab355772d46b755cab52e73051f395f938fd72 in lucene-solr's branch 
refs/heads/branch_8x from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=83ab355 ]

SOLR-12732: TestLogWatcher failure on Jenkins. Added more logging

(cherry picked from commit 8c6e30536562413639eaf8bab1087da700733b33)


> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact; I think I see the problem. It'll 
> take a lot of beasting to nail it down, though.
> Working hypothesis: when we test whether the new searcher has no messages, 
> the "no messages logged" check can run against the watcher before the new 
> one _really_ gets active.






[jira] [Created] (SOLR-13312) write out responses without creating SolrDocument objects

2019-03-08 Thread Noble Paul (JIRA)
Noble Paul created SOLR-13312:
-

 Summary: write out responses without creating SolrDocument objects
 Key: SOLR-13312
 URL: https://issues.apache.org/jira/browse/SOLR-13312
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul


Once we get a document from Lucene, there is no need to create a SolrDocument 
object to write out the response if there are no transformers.
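
A rough illustration of the idea only: ResponseFieldWriter below is a hypothetical interface standing in for whatever the response writer exposes (it is not an existing Solr API), while Document and IndexableField are the regular Lucene classes. Stored field values are streamed straight into the response sink without allocating an intermediate SolrDocument.

{code:java}
import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexableField;

// Hypothetical sink for response output; not an existing Solr interface.
interface ResponseFieldWriter {
  void writeField(String name, Object value) throws IOException;
}

final class DirectDocWriter {
  /** Stream stored field values without building an intermediate SolrDocument. */
  void write(Document luceneDoc, ResponseFieldWriter out) throws IOException {
    for (IndexableField f : luceneDoc.getFields()) {
      Object value = (f.numericValue() != null) ? f.numericValue() : f.stringValue();
      if (value != null) {
        out.writeField(f.name(), value);
      }
    }
  }
}
{code}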






[jira] [Created] (SOLR-13311) Make Solr memory efficient

2019-03-08 Thread Noble Paul (JIRA)
Noble Paul created SOLR-13311:
-

 Summary: Make Solr memory efficient
 Key: SOLR-13311
 URL: https://issues.apache.org/jira/browse/SOLR-13311
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul


One of the biggest issues with Solr is its memory usage. It creates too many 
objects to serve a request.

Places where object creation can be minimized:
* Writing out responses
* Cache entries
* Distributed requests






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 23758 - Still Failing!

2019-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23758/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 8927 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/temp/junit4-J0-20190309_034152_3289857380284337547164.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] # To suppress the following error report, specify this argument
   [junit4] # after -XX: or in .hotspotrc:  
SuppressErrorAt=/loopPredicate.cpp:315
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error 
(/home/buildbot/worker/jdk12u-linux/build/src/hotspot/share/opto/loopPredicate.cpp:315),
 pid=20764, tid=20797
   [junit4] #  assert(dom_r->unique_ctrl_out()->is_Call()) failed: unc expected
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (12.0) (fastdebug build 
12-testing+0-builds.shipilev.net-openjdk-jdk12-b109-20190215-jdk-1229)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (fastdebug 
12-testing+0-builds.shipilev.net-openjdk-jdk12-b109-20190215-jdk-1229, mixed 
mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x1241ab3]  
PhaseIdealLoop::clone_loop_predicates_fix_mem(ProjNode*, ProjNode*, 
PhaseIdealLoop*, PhaseIterGVN*)+0x123
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/J0/hs_err_pid20764.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/J0/replay_pid20764.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] Current thread is 20797
   [junit4] Dumping core ...
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/temp/junit4-J0-20190309_034152_3288491281769681701522.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: increase O_BUFLEN in ostream.hpp 
-- output truncated
   [junit4] <<< JVM J0: EOF 

[...truncated 34 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk-12-ea+shipilev-fastdebug/bin/java 
-XX:+UseCompressedOops -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=B02AD893D1E474F8 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=9.0.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=9.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-master-Linux 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/J0
 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/misc/test/temp
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dfile.encoding=UTF-8 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[JENKINS] Lucene-Solr-BadApples-NightlyTests-8.x - Build # 8 - Failure

2019-03-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/8/

13 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
ObjectTracker found 6 object(s) that were not released!!! [MMapDirectory, 
MMapDirectory, SolrCore, InternalHttpClient, MMapDirectory, MMapDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:509)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:351) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:424) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1193)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MMapDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:368)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:747)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:976)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1192)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1102)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:502)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)  at 
org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:305)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
 at 

Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Shalin Shekhar Mangar
+1

SUCCESS! [1:21:54.453123]

On Fri, Mar 8, 2019 at 6:13 PM jim ferenczi  wrote:

> Please vote for release candidate 4 for Lucene/Solr 8.0.0
>
> The artifacts can be downloaded from
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
>
> Here's my +1
> SUCCESS! [1:19:23.500783]
>


-- 
Regards,
Shalin Shekhar Mangar.


[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+8) - Build # 240 - Unstable!

2019-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/240/
Java: 64bit/jdk-13-ea+8 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.update.TransactionLogTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.update.TransactionLogTest:  
   1) Thread[id=16, name=Log4j2-TF-1-AsyncLoggerConfig-1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest] at 
java.base@13-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@13-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:235)
 at 
java.base@13-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
app//com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
 at 
app//com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
 at 
app//com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:159)
 at 
app//com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)   
  at java.base@13-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.update.TransactionLogTest: 
   1) Thread[id=16, name=Log4j2-TF-1-AsyncLoggerConfig-1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest]
at java.base@13-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@13-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:235)
at 
java.base@13-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
at 
app//com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
at 
app//com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at 
app//com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:159)
at 
app//com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
at java.base@13-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([C17644EB89D8253E]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.update.TransactionLogTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=16, 
name=Log4j2-TF-1-AsyncLoggerConfig-1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest] at 
java.base@13-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@13-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:235)
 at 
java.base@13-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 at 
app//com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
 at 
app//com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
 at 
app//com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:159)
 at 
app//com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)   
  at java.base@13-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=16, name=Log4j2-TF-1-AsyncLoggerConfig-1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest]
at java.base@13-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@13-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:235)
at 
java.base@13-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
at 
app//com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
at 
app//com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at 
app//com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:159)
at 
app//com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
at java.base@13-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([C17644EB89D8253E]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.update.TransactionLogTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.update.TransactionLogTest:  
   1) Thread[id=16, name=Log4j2-TF-1-AsyncLoggerConfig-1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest] at 
java.base@13-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@13-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:235)
 at 

[jira] [Commented] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788442#comment-16788442
 ] 

Mark Miller commented on SOLR-13310:


I guess I'm still not sure how you would handle the lifecycle of the executor 
properly if it didn't live on the CoreContainer, but I've given the background 
I have.

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}






[jira] [Commented] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-08 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788438#comment-16788438
 ] 

Shalin Shekhar Mangar commented on SOLR-13234:
--

Thanks Hoss.

[~danyal] - I am not sure either but a new PR will probably be easier to merge.

> Prometheus Metric Exporter Not Threadsafe
> -
>
> Key: SOLR-13234
> URL: https://issues.apache.org/jira/browse/SOLR-13234
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.6, 8.0
>Reporter: Danyal Prout
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>  Labels: metric-collector
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-13234-branch_7x.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Solr Prometheus Exporter collects metrics when it receives a HTTP request 
> from Prometheus. Prometheus sends this request, on its [scrape 
> interval|https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config].
>  When the time taken to collect the Solr metrics is greater than the scrape 
> interval of the Prometheus server, this results in concurrent metric 
> collection occurring in this 
> [method|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L86].
>  This method doesn't appear to be thread safe; for instance, you could have 
> concurrent modifications of a 
> [map|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L119].
>  After a while the Solr Exporter process becomes nondeterministic; we've 
> observed NPEs and loss of metrics.
> To address this, I'm proposing the following fixes:
> 1. Read/parse the configuration at startup and make it immutable. 
>  2. Collect metrics from Solr on an interval which is controlled by the Solr 
> Exporter and cache the metric samples to return during Prometheus scraping. 
> Metric collection can be expensive (for example, executing arbitrary Solr 
> searches), so it's not ideal to allow concurrent metric collection on an 
> interval which is not defined by the Solr Exporter.
> There are also a few other performance improvements that we've made while 
> fixing this, for example using the ClusterStateProvider instead of sending 
> multiple HTTP requests to each Solr node to look up all the cores.
> I'm currently finishing up these changes which I'll submit as a PR.
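
A minimal sketch of what fix #2 might look like, assuming a scrape only needs the most recent snapshot; Sample and collectFromSolr() are placeholders, not the exporter's actual classes. Collection runs on an exporter-controlled schedule and the scrape endpoint only reads a cached copy, so scrapes never trigger concurrent collection.

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

final class CachedCollector {
  // Placeholder metric sample; the real exporter has its own sample types.
  static final class Sample {
    final String name;
    final double value;
    Sample(String name, double value) { this.name = name; this.value = value; }
  }

  private final AtomicReference<List<Sample>> cache =
      new AtomicReference<>(Collections.<Sample>emptyList());
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  /** Collect on a fixed, exporter-controlled period and cache the result. */
  void start(long periodSeconds) {
    scheduler.scheduleAtFixedRate(
        () -> cache.set(collectFromSolr()), 0, periodSeconds, TimeUnit.SECONDS);
  }

  /** Called on each Prometheus scrape; only reads the cached snapshot. */
  List<Sample> scrape() {
    return cache.get();
  }

  private List<Sample> collectFromSolr() {
    // Placeholder: run the configured Solr queries / metrics requests here.
    return Collections.singletonList(new Sample("solr_up", 1.0));
  }

  void stop() {
    scheduler.shutdownNow();
  }
}
{code}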






[jira] [Commented] (SOLR-13300) Reproducing seed in master for DistributedFacetExistsSmallTest

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788433#comment-16788433
 ] 

Mark Miller commented on SOLR-13300:


Or maybe not ... you still can't iterate over it without risking a 
ConcurrentModificationException.
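
For reference, a small self-contained illustration of both constraints under discussion (it assumes nothing about the test's actual handle map beyond what the comments say): ConcurrentHashMap rejects null keys outright, while Collections.synchronizedMap accepts them but still requires manual synchronization while iterating.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyDemo {
  public static void main(String[] args) {
    Map<String, Integer> chm = new ConcurrentHashMap<>();
    try {
      chm.put(null, 1);                       // ConcurrentHashMap: no null keys
    } catch (NullPointerException expected) {
      System.out.println("ConcurrentHashMap rejects null keys");
    }

    Map<String, Integer> sync = Collections.synchronizedMap(new HashMap<>());
    sync.put(null, 1);                        // null keys are fine here...

    // ...but iteration is only safe if callers synchronize on the map;
    // otherwise a concurrent writer can still cause a
    // ConcurrentModificationException.
    synchronized (sync) {
      for (Map.Entry<String, Integer> e : sync.entrySet()) {
        System.out.println(e.getKey() + " -> " + e.getValue());
      }
    }
  }
}
{code}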

> Reproducing seed in master for DistributedFacetExistsSmallTest
> --
>
> Key: SOLR-13300
> URL: https://issues.apache.org/jira/browse/SOLR-13300
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13300.patch
>
>
> ant test -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> This test hasn't been changed lately, but reproduces 3/3 for me.
>  
> {code:java}
>2> NOTE: reproduce with: ant test  
> -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 19.1s | DistributedFacetExistsSmallTest.test <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> .facet_counts.facet_fields.t_s.null:0!=null
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([662700D3AC576EAB:EE733F0902AB0353]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:57)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:999)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:1026)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:680)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:643)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.checkRandomParams(DistributedFacetExistsSmallTest.java:139)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.test(DistributedFacetExistsSmallTest.java:87)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /Users/Erick/apache/solrVersions/playspace/solr/build/solr-core/test/J0/temp/solr.handler.component.DistributedFacetExistsSmallTest_662700D3AC576EAB-001
> {code}






[jira] [Commented] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788437#comment-16788437
 ] 

Mark Miller commented on SOLR-13310:


Yeah, we could share a thread pool and executor from the core container, and 
then anything with special needs could use its own executor more practically.

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}






[JENKINS-EA] Lucene-Solr-8.0-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 249 - Unstable!

2019-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.0-Linux/249/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.security.TestPKIAuthenticationPlugin.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([77B083BCDE8E1409:FFE4BC66707279F1]:0)
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertNotNull(Assert.java:712)
at org.junit.Assert.assertNotNull(Assert.java:722)
at 
org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:99)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)




Build Log:
[...truncated 13244 lines...]
   [junit4] Suite: org.apache.solr.security.TestPKIAuthenticationPlugin
   [junit4]   2> 796248 INFO  
(SUITE-TestPKIAuthenticationPlugin-seed#[77B083BCDE8E1409]-worker) [] 
o.a.s.SolrTestCaseJ4 

[jira] [Commented] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788413#comment-16788413
 ] 

Hoss Man commented on SOLR-13310:
-

I'm saying ideally we would have a long lived, shared, re-used (unbounded) 
*_threadpool_* as part of the core container lifecycle, that each facet request 
could wrap in a per-request *_Executor_* that was bounded by the 
{{facet.threads=N}} limit.

That way we could let the executor worry about not exceeding the 
{{facet.threads=N}} limit, instead of the wonky code SimpleFacets uses today, 
which expects an unbounded Executor but has runnables that gate themselves 
on a semaphore to ensure that we don't use too many concurrent threads.
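
A minimal sketch of that shape, using only plain JDK types; BoundedExecutor and the field names are illustrative, not Solr's actual classes. The long-lived pool stays on the container, and each facet request wraps it in a tiny Executor that never has more than facet.threads tasks in flight.

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;

// Illustrative only; not an existing Solr class.
final class BoundedExecutor implements Executor {
  private final Executor shared;        // long-lived pool owned by the container
  private final int maxConcurrent;      // e.g. the facet.threads=N limit
  private final Queue<Runnable> pending = new ArrayDeque<>();
  private int running;                  // guarded by "this"

  BoundedExecutor(Executor shared, int maxConcurrent) {
    this.shared = shared;
    this.maxConcurrent = maxConcurrent;
  }

  @Override
  public synchronized void execute(Runnable task) {
    pending.add(task);
    maybeSubmit();
  }

  private synchronized void maybeSubmit() {
    while (running < maxConcurrent && !pending.isEmpty()) {
      final Runnable next = pending.poll();
      running++;
      shared.execute(() -> {
        try {
          next.run();
        } finally {
          synchronized (BoundedExecutor.this) {
            running--;
          }
          maybeSubmit();        // hand the freed slot to the next queued task
        }
      });
    }
  }
}
{code}

A facet request would then do something like new BoundedExecutor(containerPool, facetThreads), where containerPool and facetThreads are stand-in names for the shared pool and the per-request limit.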

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}






[jira] [Commented] (SOLR-13300) Reproducing seed in master for DistributedFacetExistsSmallTest

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788412#comment-16788412
 ] 

Mark Miller commented on SOLR-13300:


This isn't some kind of performance thing; perhaps we can just use 
Collections.synchronizedMap?

> Reproducing seed in master for DistributedFacetExistsSmallTest
> --
>
> Key: SOLR-13300
> URL: https://issues.apache.org/jira/browse/SOLR-13300
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13300.patch
>
>
> ant test -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> This test hasn't been changed lately, but reproduces 3/3 for me.
>  
> {code:java}
>2> NOTE: reproduce with: ant test  
> -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 19.1s | DistributedFacetExistsSmallTest.test <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> .facet_counts.facet_fields.t_s.null:0!=null
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([662700D3AC576EAB:EE733F0902AB0353]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:57)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:999)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:1026)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:680)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:643)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.checkRandomParams(DistributedFacetExistsSmallTest.java:139)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.test(DistributedFacetExistsSmallTest.java:87)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /Users/Erick/apache/solrVersions/playspace/solr/build/solr-core/test/J0/temp/solr.handler.component.DistributedFacetExistsSmallTest_662700D3AC576EAB-001
> {code}






[jira] [Assigned] (SOLR-13300) Reproducing seed in master for DistributedFacetExistsSmallTest

2019-03-08 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-13300:
---

  Assignee: Hoss Man
Attachment: SOLR-13300.patch

bq. ... It is a good idea to make those base variables thread safe as a proper 
defense, but this skip stuff is a real problem.

Yeah, I'm sure that in general it's a good fix for most tests; it's just that 
in this particular case, where we know there is only single-threaded access to 
the handle, not being able to use a null key is problematic.

After thinking about it for a few minutes I realized there's a much simpler 
solution: instead of making the test smart enough to know that the 'null' key 
should be treated differently, make it smart enough to not bother testing 
this permutation at all.

I'm attaching a patch that I'm currently beasting -- if there are no failures 
or objections I'll move forward with committing.




As far as the broader "meta" conversation goes about if/how to better deal with 
null keys in the {{handle}} ... AFAICT this is the only test whose handle usage 
Miller had to modify when converting to ConcurrentHashMap, so I don't think we 
really need to worry about it.

(In general I'm not a fan of this kind of "run the same query against single 
node and cloud and test that it produces the same output" test anyway, 
because such queries are only useful if the exact same queries are 
independently tested for correctness -- i.e.: "did it even match any docs, let 
alone the correct docs?" -- somewhere else. We already have better patterns & 
base classes for writing cloud tests; we should just leave all this 
handle/SKIP/compare stuff behind and move on with good cloud tests.)


> Reproducing seed in master for DistributedFacetExistsSmallTest
> --
>
> Key: SOLR-13300
> URL: https://issues.apache.org/jira/browse/SOLR-13300
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13300.patch
>
>
> ant test -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> This test hasn't been changed lately, but reproduces 3/3 for me.
>  
> {code:java}
>2> NOTE: reproduce with: ant test  
> -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 19.1s | DistributedFacetExistsSmallTest.test <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> .facet_counts.facet_fields.t_s.null:0!=null
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([662700D3AC576EAB:EE733F0902AB0353]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:57)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:999)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:1026)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:680)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:643)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.checkRandomParams(DistributedFacetExistsSmallTest.java:139)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.test(DistributedFacetExistsSmallTest.java:87)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /Users/Erick/apache/solrVersions/playspace/solr/build/solr-core/test/J0/temp/solr.handler.component.DistributedFacetExistsSmallTest_662700D3AC576EAB-001
> {code}






[jira] [Commented] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788407#comment-16788407
 ] 

Mark Miller commented on SOLR-13310:


bq. but there is really no reason why there needs to be a "long lived" 
facetExecutor

Although I don't know that I agree with that when it comes to creating threads, 
unless you are saying you can do this without creating threads. Otherwise, a 
thread pool and thread reuse seem like a good reason to me. When you do that 
consistently there are lots of nice benefits.

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}






[jira] [Comment Edited] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788404#comment-16788404
 ] 

Mark Miller edited comment on SOLR-13310 at 3/9/19 12:25 AM:
-

bq. Maybe, but there is really no reason why there needs to be a "long lived" 
facetExecutor 

My comment is more general. Having noticed a lot of wasted executor 
usage, I added an executor to the base test class and CoreContainer and moved 
all this little one-off stuff to using those executors. We want them to help 
discourage silly executor usage (static executors we don't shut down or wait 
for; tons of thread pools with little reuse vs. a few shared pools) and give 
easy access to a pool of threads for random work.

In this case, if you don't need the long-lived static facet executor, that's 
fine.


was (Author: markrmil...@gmail.com):
bq. Maybe, but there is really no reason why there needs to be a "long lived" 
facetExecutor 

My comment is more general. Having noticed a lot of wasted executor 
usage, I added an executor to the base test class and CoreContainer and moved 
all this little one-off stuff to using those executors. We want them to help 
discourage silly executor usage and give easy access to a pool of threads for 
random work.

In this case, if you don't need the long-lived static facet executor, that's 
fine.

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}






[jira] [Commented] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788404#comment-16788404
 ] 

Mark Miller commented on SOLR-13310:


bq. Maybe, but there is really no reason why there needs to be a "long lived" 
facetExecutor 

My comment is more general. Having noticed a lot of wasted executor 
usage, I added an executor to the base test class and CoreContainer and moved 
all this little one-off stuff to using those executors. We want them to help 
discourage silly executor usage and give easy access to a pool of threads for 
random work.

In this case, if you don't need the long-lived static facet executor, that's 
fine.

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}






[jira] [Comment Edited] (SOLR-13300) Reproducing seed in master for DistributedFacetExistsSmallTest

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788401#comment-16788401
 ] 

Mark Miller edited comment on SOLR-13300 at 3/9/19 12:15 AM:
-

bq. i'm not sure why exactly that was a neccessary change given that 
ConcurrentHashMap will already "protect itself" from having a null key

It's been a while, but I assume that was done to work around how 
ConcurrentHashMap was handling a null key.

If I didn't see real fails with this I would have walked it back, I know that. 
It was an annoying change for something that should be so trivial.



was (Author: markrmil...@gmail.com):
bq. i'm not sure why exactly that was a neccessary change given that 
ConcurrentHashMap will already "protect itself" from having a null key

It's been a while, but I assume that was done to work around how 
ConcurrentHashMap was handling a null key.

If I didn't see rail fails with this I would have walked it back, I know that. 
It was an annoying change for something that should be so trivial.


> Reproducing seed in master for DistributedFacetExistsSmallTest
> --
>
> Key: SOLR-13300
> URL: https://issues.apache.org/jira/browse/SOLR-13300
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Major
>
> ant test -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> This test hasn't been changed lately, but reproduces 3/3 for me.
>  
> {code:java}
>2> NOTE: reproduce with: ant test  
> -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 19.1s | DistributedFacetExistsSmallTest.test <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> .facet_counts.facet_fields.t_s.null:0!=null
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([662700D3AC576EAB:EE733F0902AB0353]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:57)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:999)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:1026)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:680)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:643)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.checkRandomParams(DistributedFacetExistsSmallTest.java:139)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.test(DistributedFacetExistsSmallTest.java:87)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /Users/Erick/apache/solrVersions/playspace/solr/build/solr-core/test/J0/temp/solr.handler.component.DistributedFacetExistsSmallTest_662700D3AC576EAB-001
> {code}






[jira] [Commented] (SOLR-13300) Reproducing seed in master for DistributedFacetExistsSmallTest

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788401#comment-16788401
 ] 

Mark Miller commented on SOLR-13300:


bq. i'm not sure why exactly that was a neccessary change given that 
ConcurrentHashMap will already "protect itself" from having a null key

It's been a while, but I assume that was done to work around how 
ConcurrentHashMap was handling a null key.

If I didn't see rail fails with this I would have walked it back, I know that. 
It was an annoying change for something that should be so trivial.


> Reproducing seed in master for DistributedFacetExistsSmallTest
> --
>
> Key: SOLR-13300
> URL: https://issues.apache.org/jira/browse/SOLR-13300
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Major
>
> ant test -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> This test hasn't been changed lately, but reproduces 3/3 for me.
>  
> {code:java}
>2> NOTE: reproduce with: ant test  
> -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 19.1s | DistributedFacetExistsSmallTest.test <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> .facet_counts.facet_fields.t_s.null:0!=null
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([662700D3AC576EAB:EE733F0902AB0353]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:57)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:999)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:1026)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:680)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:643)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.checkRandomParams(DistributedFacetExistsSmallTest.java:139)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.test(DistributedFacetExistsSmallTest.java:87)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /Users/Erick/apache/solrVersions/playspace/solr/build/solr-core/test/J0/temp/solr.handler.component.DistributedFacetExistsSmallTest_662700D3AC576EAB-001
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13300) Reproducing seed in master for DistributedFacetExistsSmallTest

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788399#comment-16788399
 ] 

Mark Miller commented on SOLR-13300:


{noformat} HashMap to a ConcurrentHashMap (which doesn't support null keys) ... 
although I'm not sure if that was something that he found was definitively 
causing problems in other tests, or just something he thought was a "good idea" 
and tried it.{noformat}

That change ended up being annoying, but it was not made for the hell of it. Tests often 
don't respect shared base variables when they are not thread safe, and timing 
changes can expose those problems. It is a good idea to make those base 
variables thread safe as a proper defense, but this SKIP handling is a real 
problem.

> Reproducing seed in master for DistributedFacetExistsSmallTest
> --
>
> Key: SOLR-13300
> URL: https://issues.apache.org/jira/browse/SOLR-13300
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Major
>
> ant test -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> This test hasn't been changed lately, but reproduces 3/3 for me.
>  
> {code:java}
>2> NOTE: reproduce with: ant test  
> -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 19.1s | DistributedFacetExistsSmallTest.test <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> .facet_counts.facet_fields.t_s.null:0!=null
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([662700D3AC576EAB:EE733F0902AB0353]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:57)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:999)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:1026)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:680)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:643)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.checkRandomParams(DistributedFacetExistsSmallTest.java:139)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.test(DistributedFacetExistsSmallTest.java:87)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /Users/Erick/apache/solrVersions/playspace/solr/build/solr-core/test/J0/temp/solr.handler.component.DistributedFacetExistsSmallTest_662700D3AC576EAB-001
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788385#comment-16788385
 ] 

Hoss Man edited comment on SOLR-13310 at 3/9/19 12:07 AM:
--

This was evidently introduced as part of [~markrmil...@gmail.com]'s work in 
SOLR-12801 – I'm guessing because prior to this the executor wasn't being 
shut down cleanly? ... but this still seems like a bug.

Best case scenario, it's going to be confusing as hell for anyone looking at 
thread dumps; worst case scenario, this (may) actually cause some bugs because 
of how the UpdateShardHandler is treated specially in terms of error handling.

 
{noformat}
@@ -170,6 +166,7 @@ public class SimpleFacets {
 this.docsOrig = docs;
 this.global = params;
 this.rb = rb;
+this.facetExecutor = 
req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
   }
 
   public void setFacetDebugInfo(FacetDebugInfo fdebugParent) {
@@ -773,13 +770,7 @@ public class SimpleFacets {
 }
   };
 
-  static final Executor facetExecutor = new 
ExecutorUtil.MDCAwareThreadPoolExecutor(
-  0,
-  Integer.MAX_VALUE,
-  10, TimeUnit.SECONDS, // terminate idle threads after 10 sec
-  new SynchronousQueue()  // directly hand off tasks
-  , new DefaultSolrThreadFactory("facetExecutor")
-  );
+  private final Executor facetExecutor;
{noformat}

(EDIT: added "may" regarding the potential for bugs here ... i didn't mean to 
imply that i know for a fact it was causing bugs)


was (Author: hossman):
This was evidently introduced as part of [~markrmil...@gmail.com]'s work in 
SOLR-12801 – I'm guessing because prior to this the executor wasn't being 
shut down cleanly? ... but this still seems like a bug.

Best case scenario, it's going to be confusing as hell for anyone looking at 
thread dumps; worst case scenario, this actually causes some bugs because of how 
the UpdateShardHandler is treated specially in terms of error handling.

 
{noformat}
@@ -170,6 +166,7 @@ public class SimpleFacets {
 this.docsOrig = docs;
 this.global = params;
 this.rb = rb;
+this.facetExecutor = 
req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
   }
 
   public void setFacetDebugInfo(FacetDebugInfo fdebugParent) {
@@ -773,13 +770,7 @@ public class SimpleFacets {
 }
   };
 
-  static final Executor facetExecutor = new 
ExecutorUtil.MDCAwareThreadPoolExecutor(
-  0,
-  Integer.MAX_VALUE,
-  10, TimeUnit.SECONDS, // terminate idle threads after 10 sec
-  new SynchronousQueue()  // directly hand off tasks
-  , new DefaultSolrThreadFactory("facetExecutor")
-  );
+  private final Executor facetExecutor;
{noformat}

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788398#comment-16788398
 ] 

Hoss Man commented on SOLR-13310:
-

{quote}No clue how a thread pool executor is treated specially due to error 
handling.
{quote}
I was thinking of this comment regarding SOLR-11880 – but I guess it would just 
complicate/confuse stack traces in the event of exceptions...

{code}
  private ExecutorService updateExecutor = new 
ExecutorUtil.MDCAwareThreadPoolExecutor(0, Integer.MAX_VALUE,
  60L, TimeUnit.SECONDS,
  new SynchronousQueue<>(),
  new SolrjNamedThreadFactory("updateExecutor"),
  // the Runnable added to this executor handles all exceptions so we 
disable stack trace collection as an optimization
  // see SOLR-11880 for more details
  false);
{code}

bq. Executors like this should live on the core container ...

Maybe, but there is really no reason why there needs to be a "long-lived" 
facetExecutor ... the most straightforward way for {{facet.threads=N}} to work 
would be to create an Executor of size N on demand ... it's really just a 
shared _threadpool_ that we should re-use to prevent lots of short-lived 
threads.
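
As a rough sketch of that per-request idea (names and placement are assumptions, not the actual SimpleFacets code):

{code:java}
// Hypothetical sketch only: create a bounded pool per request for facet.threads=N
// instead of borrowing the long-lived updateExecutor.
// Assumes org.apache.solr.common.util.ExecutorUtil, org.apache.solr.util.DefaultSolrThreadFactory
// and org.apache.solr.common.params.FacetParams.
int facetThreads = params.getInt(FacetParams.FACET_THREADS, 0);
if (facetThreads <= 0) {
  // run the per-field facet work inline, no executor needed
} else {
  ExecutorService pool = ExecutorUtil.newMDCAwareFixedThreadPool(
      facetThreads, new DefaultSolrThreadFactory("facetExecutor"));
  try {
    // submit the per-field facet tasks; the pool size itself enforces the N bound
  } finally {
    ExecutorUtil.shutdownAndAwaitTermination(pool);
  }
}
{code}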

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13300) Reproducing seed in master for DistributedFacetExistsSmallTest

2019-03-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788392#comment-16788392
 ] 

Hoss Man commented on SOLR-13300:
-

OK, so here's what's happening here:
 # When [~mkhludnev] first wrote this test as part of SOLR-5725, he explicitly noted in the test that there was in fact a discrepancy in how {{facet.missing=true=0}} was handled between single-node Solr and distributed Solr, and had the test account for this by using {{handle.put(null, SKIP);}} (because the facet.missing count is returned in the "facet_fields" response as a null key).

 # In Nov 2018, as part of his SOLR-12801 work, [~markrmil...@gmail.com]'s 
75b183196798232aa6f2dcb117f309119053 commit changed this test to just 
remove the {{handle.put(null, SKIP);}} call – evidently because he felt it was 
important to change {{BaseDistributedSearchTestCase.handle}} from a HashMap to 
a ConcurrentHashMap (which doesn't support null keys) ... although I'm not sure 
if that was something that he found was definitively causing problems in other 
tests, or just something he thought was a "good idea" and tried it.

Either way: The net result is that since then this test is no longer protecting 
itself against the possible random permutation of params that is known to be 
inconsistent and cause test failures.

Just attempting to re-instate the {{handle.put(null, SKIP);}} causes an NPE 
because of the use of ConcurrentHashMap.

My first thought was that the most straightforward fix would be for this 
particular test to override {{protected Map handle}} to go back to using a 
simple HashMap (since the test doesn't involve any concurrency) – but when I 
tried that the test still failed because Mark's 75b1831967 commit also 
redefined {{BaseDistributedSearchTestCase.flags}} to short-circuit on 'null' so 
it never even consults the {{handle}} ... I'm not sure why exactly that was a 
necessary change given that ConcurrentHashMap will already "protect itself" 
from having a null key ... perhaps that can be rolled back?
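
A minimal sketch of that first thought, assuming the {{handle}} map and {{SKIP}} flag inherited from BaseDistributedSearchTestCase (and, as noted above, it does not actually work while {{flags}} short-circuits on null):

{code:java}
// Hypothetical sketch only: swap the shared handle map back to a plain HashMap so the
// null-key SKIP entry can be re-instated for this single-threaded test.
public class DistributedFacetExistsSmallTest extends BaseDistributedSearchTestCase {
  {
    handle = new HashMap<>(handle);   // HashMap tolerates the null key
    handle.put(null, SKIP);           // skip comparing the facet.missing (null) bucket
  }
  // ... rest of the test unchanged ...
}
{code}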

> Reproducing seed in master for DistributedFacetExistsSmallTest
> --
>
> Key: SOLR-13300
> URL: https://issues.apache.org/jira/browse/SOLR-13300
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Major
>
> ant test -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> This test hasn't been changed lately, but reproduces 3/3 for me.
>  
> {code:java}
>2> NOTE: reproduce with: ant test  
> -Dtestcase=DistributedFacetExistsSmallTest -Dtests.method=test 
> -Dtests.seed=662700D3AC576EAB -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ca -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 19.1s | DistributedFacetExistsSmallTest.test <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> .facet_counts.facet_fields.t_s.null:0!=null
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([662700D3AC576EAB:EE733F0902AB0353]:0)
>[junit4]>  at junit.framework.Assert.fail(Assert.java:57)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:999)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:1026)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:680)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:643)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.checkRandomParams(DistributedFacetExistsSmallTest.java:139)
>[junit4]>  at 
> org.apache.solr.handler.component.DistributedFacetExistsSmallTest.test(DistributedFacetExistsSmallTest.java:87)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /Users/Erick/apache/solrVersions/playspace/solr/build/solr-core/test/J0/temp/solr.handler.component.DistributedFacetExistsSmallTest_662700D3AC576EAB-001
> {code}




[jira] [Commented] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788385#comment-16788385
 ] 

Hoss Man commented on SOLR-13310:
-

This was evidently introduced as part of [~markrmil...@gmail.com]'s work in 
SOLR-12801 – I'm guessing because prior to this the executor wasn't being 
shut down cleanly? ... but this still seems like a bug.

Best case scenario, it's going to be confusing as hell for anyone looking at 
thread dumps; worst case scenario, this actually causes some bugs because of how 
the UpdateShardHandler is treated specially in terms of error handling.

 
{noformat}
@@ -170,6 +166,7 @@ public class SimpleFacets {
 this.docsOrig = docs;
 this.global = params;
 this.rb = rb;
+this.facetExecutor = 
req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
   }
 
   public void setFacetDebugInfo(FacetDebugInfo fdebugParent) {
@@ -773,13 +770,7 @@ public class SimpleFacets {
 }
   };
 
-  static final Executor facetExecutor = new 
ExecutorUtil.MDCAwareThreadPoolExecutor(
-  0,
-  Integer.MAX_VALUE,
-  10, TimeUnit.SECONDS, // terminate idle threads after 10 sec
-  new SynchronousQueue()  // directly hand off tasks
-  , new DefaultSolrThreadFactory("facetExecutor")
-  );
+  private final Executor facetExecutor;
{noformat}

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788391#comment-16788391
 ] 

Mark Miller commented on SOLR-13310:


I got rid of the ugly exceptions that allowed static executors like this to 
live, because those exceptions hid all sorts of things (things like allowing 30 
seconds for threads to die and such).

No clue how a thread pool executor is treated specially due to error handling.

Executors like this should live on the core container - I think I made a 
general-use executor like that for the starburst branch, but here I just 
grabbed what is called the update executor, which as currently used is more 
accurately the non-HTTP shard handler miscellaneous executor.


> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788388#comment-16788388
 ] 

Hoss Man commented on SOLR-13310:
-

(The most straightforward fix is probably for the facet code to instantiate a 
new Executor on each request that uses {{facet.threads=N}}, bounded by N.)

(It's never really made much sense to me that facet.threads used an 
infinitely sized Executor with calling code that carefully made sure to never 
submit more than N tasks at a time.)

> facet.threads is using the updateExecutor
> -
>
> Key: SOLR-13310
> URL: https://issues.apache.org/jira/browse/SOLR-13310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
> this.facetExecutor = 
> req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13310) facet.threads is using the updateExecutor

2019-03-08 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13310:
---

 Summary: facet.threads is using the updateExecutor
 Key: SOLR-13310
 URL: https://issues.apache.org/jira/browse/SOLR-13310
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


Had a WTF moment skimming some SimpleFacets code today...

{code}
this.facetExecutor = 
req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12584) Add basic auth credentials configuration to the Solr exporter for Prometheus/Grafana

2019-03-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788349#comment-16788349
 ] 

Jan Høydahl commented on SOLR-12584:


[~tulipbill] can you file a separate Jira issue for the ZK ACL issue? I suppose 
we should borrow code from Solr and support {{SOLR_ZK_CREDS_AND_ACLS}} env var 
just as bin/solr does.

> Add basic auth credentials configuration to the Solr exporter for 
> Prometheus/Grafana  
> --
>
> Key: SOLR-12584
> URL: https://issues.apache.org/jira/browse/SOLR-12584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics, security
>Affects Versions: 7.3, 7.4
>Reporter: Dwane Hall
>Priority: Minor
>  Labels: authentication, metrics, security
> Attachments: lucene-solr.patch
>
>
> The Solr exporter for Prometheus/Grafana provides a useful visual layer over 
> the Solr metrics API for monitoring the state of a Solr cluster. Currently 
> this cannot be configured and used on a secure Solr cluster with the Basic 
> Authentication plugin enabled. The exporter does not provide a mechanism to 
> configure/pass through basic auth credentials when SolrJ requests information 
> from the metrics API endpoints; supporting this would be a useful addition for 
> Solr users running a secure Solr instance.
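
For illustration, SolrJ can already attach Basic Auth credentials per request; a minimal sketch of the kind of call the exporter would need to make (variable names and the credential source are assumptions):

{code:java}
// Hypothetical sketch: pass Basic Auth credentials on the SolrJ request used to hit the metrics API.
// Assumes an existing SolrClient 'client' and SolrParams 'params'.
GenericSolrRequest req = new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/metrics", params);
req.setBasicAuthCredentials(System.getProperty("basicauth.user"),   // illustrative property names
                            System.getProperty("basicauth.pass"));
NamedList<Object> metrics = client.request(req);
{code}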



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 23757 - Failure!

2019-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23757/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:+UseCompressedOops 
-XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 7919 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/temp/junit4-J0-20190308_221420_1975441016452137319176.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] # To suppress the following error report, specify this argument
   [junit4] # after -XX: or in .hotspotrc:  
SuppressErrorAt=/loopPredicate.cpp:315
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error 
(/home/buildbot/worker/jdk12u-linux/build/src/hotspot/share/opto/loopPredicate.cpp:315),
 pid=28015, tid=28047
   [junit4] #  assert(dom_r->unique_ctrl_out()->is_Call()) failed: unc expected
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (12.0) (fastdebug build 
12-testing+0-builds.shipilev.net-openjdk-jdk12-b109-20190215-jdk-1229)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (fastdebug 
12-testing+0-builds.shipilev.net-openjdk-jdk12-b109-20190215-jdk-1229, mixed 
mode, sharing, tiered, compressed oops, parallel gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x1241ab3]  
PhaseIdealLoop::clone_loop_predicates_fix_mem(ProjNode*, ProjNode*, 
PhaseIdealLoop*, PhaseIterGVN*)+0x123
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/J0/hs_err_pid28015.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/J0/replay_pid28015.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] Current thread is 28047
   [junit4] Dumping core ...
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/temp/junit4-J0-20190308_221420_19713163305861016941459.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: increase O_BUFLEN in ostream.hpp 
-- output truncated
   [junit4] <<< JVM J0: EOF 

[...truncated 57 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk-12-ea+shipilev-fastdebug/bin/java 
-XX:+UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=E06BB2C2770A5D7B 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=9.0.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=9.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-master-Linux 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/J0
 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/temp
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dfile.encoding=UTF-8 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[GitHub] [lucene-solr] saschaszott opened a new pull request #602: docu change: use class TopDocs instead of Hits

2019-03-08 Thread GitBox
saschaszott opened a new pull request #602: docu change: use class TopDocs 
instead of Hits
URL: https://github.com/apache/lucene-solr/pull/602
 
 
   `Hits` was removed in Lucene 3.0


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Anshum Gupta
+1

Downloaded and tried standard indexing/search and everything worked as
expected.
Changes look good.

SUCCESS! [1:15:03.494406]

On Fri, Mar 8, 2019 at 4:43 AM jim ferenczi  wrote:

> Please vote for release candidate 4 for Lucene/Solr 8.0.0
>
> The artifacts can be downloaded from
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
>
> Here's my +1
> SUCCESS! [1:19:23.500783]
>


-- 
Anshum Gupta


[jira] [Commented] (LUCENE-8671) Add setting for moving FST offheap/onheap

2019-03-08 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788340#comment-16788340
 ] 

Ankit Jain commented on LUCENE-8671:


[~simonw] [~mikemccand] I have created PR - 
https://github.com/apache/lucene-solr/pull/601 with the code change for having 
reader attributes in the index writer config. Please take a look.

> Add setting for moving FST offheap/onheap
> -
>
> Key: LUCENE-8671
> URL: https://issues.apache.org/jira/browse/LUCENE-8671
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs, core/store
>Reporter: Ankit Jain
>Priority: Minor
> Attachments: offheap_generic_settings.patch, offheap_settings.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> While LUCENE-8635 adds support for loading the FST offheap using mmap, users do 
> not have the flexibility to specify the fields for which the FST needs to be 
> offheap. That flexibility would let users tune heap usage to their workload.
> Ideal way will be to add an attribute to FieldInfo, where we have 
> put/getAttribute. Then FieldReader can inspect the FieldInfo and pass the 
> appropriate On/OffHeapStore when creating its FST. It can support special 
> keywords like ALL/NONE.
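
A rough sketch of the FieldInfo-attribute idea described above (the attribute key, default handling, and store selection are assumptions for illustration, not a committed API):

{code:java}
// Hypothetical sketch: FieldReader consults a per-field attribute to decide where the
// terms-index FST lives. "fstOffHeap" is an illustrative attribute name.
String pref = fieldInfo.getAttribute("fstOffHeap");          // set via FieldInfo.putAttribute at index time
boolean offHeap = (pref != null) ? Boolean.parseBoolean(pref)
                                 : defaultOffHeap;           // default driven by an ALL/NONE style setting
// FieldReader would then open the FST with an off-heap store (e.g. OffHeapFSTStore)
// when offHeap is true, and the usual on-heap store otherwise.
{code}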



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jainankitk opened a new pull request #601: Adding reader settings for moving fst offheap

2019-03-08 Thread GitBox
jainankitk opened a new pull request #601: Adding reader settings for moving 
fst offheap
URL: https://github.com/apache/lucene-solr/pull/601
 
 
   This change adds setting for moving fst offheap, specifically address 
[JIRA-8671](https://issues.apache.org/jira/browse/LUCENE-8671)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6741) IPv6 Field Type

2019-03-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788318#comment-16788318
 ] 

David Smiley commented on SOLR-6741:


Any progress on this Dale?  If you've given up but made partial progress, you 
can share your progress.  I was thinking this would be a nice GSOC project 
(along with some related issues), so I labelled it as such.

> IPv6 Field Type
> ---
>
> Key: SOLR-6741
> URL: https://issues.apache.org/jira/browse/SOLR-6741
> Project: Solr
>  Issue Type: Improvement
>Reporter: Lloyd Ramey
>Priority: Major
>  Labels: gsoc2019, mentor
> Attachments: SOLR-6741.patch
>
>
> It would be nice if Solr had a field type which could be used to index IPv6 
> data and supported efficient range queries. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13244) Nodes view fails when a node is temporarily down

2019-03-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788311#comment-16788311
 ] 

Jan Høydahl commented on SOLR-13244:


PR is ready. Will merge on Tuesday if no better ideas on styling or other 
things :)

This PR also fixes a bug introduced in SOLR-12121 (unreleased) where 
replica/core links were broken from Node screen.

> Nodes view fails when a node is temporarily down
> 
>
> Key: SOLR-13244
> URL: https://issues.apache.org/jira/browse/SOLR-13244
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
> Attachments: solr13244.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The Cloud->Nodes view lists all nodes, grouped by host. However, if a node is 
> temporarily down (not in live_nodes), the whole screen is broken, and instead 
> error messages like
> {noformat}
> Requested node example.com:8983_solr is not part of cluster{noformat}
> This happens on the Ajax request to fetch {{admin/metrics}} and 
> {{admin/info/system}}. A better approach would be to skip requesting metrics 
> for downed nodes but still display them in the list, perhaps with a DOWN 
> label or other background colour, to clearly tell that the node is configured 
> but not live.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12121) JWT Authentication plugin

2019-03-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788306#comment-16788306
 ] 

ASF subversion and git services commented on SOLR-12121:


Commit 4dfe945ae7e18ee93c7bb93c490446f23d544d1c in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4dfe945 ]

SOLR-12121: Move CHANGES entry from Improvements to New Features section

(cherry picked from commit 9eabaf46a2be77383643dada10e93087970c07f7)


> JWT Authentication plugin
> -
>
> Key: SOLR-12121
> URL: https://issues.apache.org/jira/browse/SOLR-12121
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.x, master (9.0)
>
> Attachments: image-2018-08-27-13-04-04-183.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A new Authentication plugin that will accept a [Json Web 
> Token|https://en.wikipedia.org/wiki/JSON_Web_Token] (JWT) in the 
> Authorization header and validate it by checking the cryptographic signature. 
> The plugin will not perform the authentication itself but assert that the 
> user was authenticated by the service that issued the JWT token.
> JWT defined a number of standard claims, and user principal can be fetched 
> from the {{sub}} (subject) claim and passed on to Solr. The plugin will 
> always check the {{exp}} (expiry) claim and optionally enforce checks on the 
> {{iss}} (issuer) and {{aud}} (audience) claims.
> The first version of the plugin will only support RSA signing keys and will 
> support fetching the public key of the issuer through a [Json Web 
> Key|https://tools.ietf.org/html/rfc7517] (JWK) file, either from a https URL 
> or from local file.
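
For illustration only, the validation flow described above roughly corresponds to the following jose4j usage; the issuer, audience, and JWK URL are placeholders, and this is not the plugin's actual code:

{code:java}
// Hypothetical sketch of JWT validation against a remote JWK set (jose4j API).
HttpsJwks jwks = new HttpsJwks("https://issuer.example.com/.well-known/jwks.json"); // placeholder URL
JwtConsumer consumer = new JwtConsumerBuilder()
    .setRequireExpirationTime()                                    // always check the exp claim
    .setExpectedIssuer("https://issuer.example.com")               // optional iss check
    .setExpectedAudience("solr")                                   // optional aud check
    .setVerificationKeyResolver(new HttpsJwksVerificationKeyResolver(jwks))
    .build();
JwtClaims claims = consumer.processToClaims(jwtFromAuthorizationHeader);
String principal = claims.getSubject();                            // sub claim becomes the user principal
{code}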



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12121) JWT Authentication plugin

2019-03-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788304#comment-16788304
 ] 

ASF subversion and git services commented on SOLR-12121:


Commit 9eabaf46a2be77383643dada10e93087970c07f7 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9eabaf4 ]

SOLR-12121: Move CHANGES entry from Improvements to New Features section


> JWT Authentication plugin
> -
>
> Key: SOLR-12121
> URL: https://issues.apache.org/jira/browse/SOLR-12121
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.x, master (9.0)
>
> Attachments: image-2018-08-27-13-04-04-183.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A new Authentication plugin that will accept a [Json Web 
> Token|https://en.wikipedia.org/wiki/JSON_Web_Token] (JWT) in the 
> Authorization header and validate it by checking the cryptographic signature. 
> The plugin will not perform the authentication itself but assert that the 
> user was authenticated by the service that issued the JWT token.
> JWT defined a number of standard claims, and user principal can be fetched 
> from the {{sub}} (subject) claim and passed on to Solr. The plugin will 
> always check the {{exp}} (expiry) claim and optionally enforce checks on the 
> {{iss}} (issuer) and {{aud}} (audience) claims.
> The first version of the plugin will only support RSA signing keys and will 
> support fetching the public key of the issuer through a [Json Web 
> Key|https://tools.ietf.org/html/rfc7517] (JWK) file, either from a https URL 
> or from local file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13309) Add FieldTypes for Lucene IntRange, LongRange, FloatRange, DoubleRange

2019-03-08 Thread David Smiley (JIRA)
David Smiley created SOLR-13309:
---

 Summary: Add FieldTypes for Lucene IntRange, LongRange, 
FloatRange, DoubleRange
 Key: SOLR-13309
 URL: https://issues.apache.org/jira/browse/SOLR-13309
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Schema and Analysis
Reporter: David Smiley


This issue is for adding Solr FieldType subclasses that expose the 
functionality implemented by the Lucene classes: IntRange, LongRange, 
FloatRange, DoubleRange.

An attribute on the FieldType could specify the dimensionality 1-4.

We'll need to come up with a syntax to specify the predicate: Intersects, 
Contains, or Within.

Credit to [~nknize] for implementing these cool classes at the Lucene level.
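
For reference, the Lucene-level classes this would expose look roughly like this (1-dimensional case; the field name and values are examples, and {{writer}} is an assumed open IndexWriter):

{code:java}
// Index a single [10, 20] integer range and query it with the three predicates mentioned above.
Document doc = new Document();
doc.add(new IntRange("price_range", new int[] {10}, new int[] {20}));
writer.addDocument(doc);

Query intersects = IntRange.newIntersectsQuery("price_range", new int[] {15}, new int[] {25});
Query contains   = IntRange.newContainsQuery("price_range",   new int[] {12}, new int[] {18});
Query within     = IntRange.newWithinQuery("price_range",     new int[] {0},  new int[] {100});
{code}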



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 67 - Unstable!

2019-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/67/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudHttp2SolrClientTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:509)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:351) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:424) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1193)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:517)  
at org.apache.solr.core.SolrCore.(SolrCore.java:968)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1192)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1102)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:368)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:747)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:976)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1192)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1102)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
 

[jira] [Updated] (SOLR-7489) Don't wait as long to try and recover hdfs leases on transaction log files.

2019-03-08 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-7489:
---
Fix Version/s: (was: 6.0)
   (was: 5.2)

> Don't wait as long to try and recover hdfs leases on transaction log files.
> ---
>
> Key: SOLR-7489
> URL: https://issues.apache.org/jira/browse/SOLR-7489
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> We initially just took most of this code from hbase which will wait for up to 
> 15 minutes. This doesn't seem ideal - we should give up sooner and treat the 
> file as not recoverable.
> We also need to fix the possible data loss message. This is really the same 
> as if a transaction log on local disk were to become corrupt, and if you have 
> a replica to recover from, things will be fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12458) ADLS support for SOLR

2019-03-08 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-12458:

Fix Version/s: (was: 8.0)

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 8.0
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12458) ADLS support for SOLR

2019-03-08 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-12458:

Affects Version/s: (was: 8.0)

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9111) HdfsCollectionsAPIDistributedZkTest stalled for over 75 hours

2019-03-08 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788263#comment-16788263
 ] 

Kevin Risden commented on SOLR-9111:


This doesn't happen anymore. The suite timeout killed it.

Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1786/

2 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([A23D00C6F401CC94]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest

Error Message:
Suite timeout exceeded (>= 360 msec).

> HdfsCollectionsAPIDistributedZkTest stalled for over 75 hours
> -
>
> Key: SOLR-9111
> URL: https://issues.apache.org/jira/browse/SOLR-9111
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Minor
>
> In [https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/63], 
> HdfsCollectionsAPIDistributedZKTest stalled for 272347s - over 75 hours - 
> before I killed the job.
> I think it should have a timeout, or maybe nightly tests should have a global 
> timeout?
> Except for the monster tests, it seems to me that 24 hours is a reasonable 
> limit on the full suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] danmuzi commented on issue #585: LUCENE-8698: Fix replaceIgnoreCase method bug in EscapeQuerySyntaxImpl

2019-03-08 Thread GitBox
danmuzi commented on issue #585: LUCENE-8698: Fix replaceIgnoreCase method bug 
in EscapeQuerySyntaxImpl
URL: https://github.com/apache/lucene-solr/pull/585#issuecomment-471064954
 
 
   Hi, @romseygeek. 
   I'm sorry for bothering you.
   
   This PR is a patch for the 
[LUCENE-8572](https://issues.apache.org/jira/browse/LUCENE-8572) issue.
   I've been waiting for two weeks, but have not received a review yet.
   Personally, I think it could be a pretty important issue.
   
   If you have time, could you please check this one?
   Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9111) HdfsCollectionsAPIDistributedZkTest stalled for over 75 hours

2019-03-08 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-9111.

Resolution: Fixed

SOLR-13060 fixed the test timeout so that the test eventually dies. Didn't fix 
the underlying issue.

> HdfsCollectionsAPIDistributedZkTest stalled for over 75 hours
> -
>
> Key: SOLR-9111
> URL: https://issues.apache.org/jira/browse/SOLR-9111
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Minor
>
> In [https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/63], 
> HdfsCollectionsAPIDistributedZKTest stalled for 272347s - over 75 hours - 
> before I killed the job.
> I think it should have a timeout, or maybe nightly tests should have a global 
> timeout?
> Except for the monster tests, it seems to me that 24 hours is a reasonable 
> limit on the full suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9111) HdfsCollectionsAPIDistributedZkTest stalled for over 75 hours

2019-03-08 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9111:
---
Component/s: hdfs
 Hadoop Integration

> HdfsCollectionsAPIDistributedZkTest stalled for over 75 hours
> -
>
> Key: SOLR-9111
> URL: https://issues.apache.org/jira/browse/SOLR-9111
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Steve Rowe
>Priority: Minor
>
> In [https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/63], 
> HdfsCollectionsAPIDistributedZKTest stalled for 272347s - over 75 hours - 
> before I killed the job.
> I think it should have a timeout, or maybe nightly tests should have a global 
> timeout?
> Except for the monster tests, it seems to me that 24 hours is a reasonable 
> limit on the full suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6237) An option to have only leaders write and replicas read when using a shared file system with SolrCloud.

2019-03-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-6237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788245#comment-16788245
 ] 

David Smiley commented on SOLR-6237:


I believe we'll see something this year from others I work with, though I 
promise nothing in saying this.

Just curious Peter; if this feature were to exist, would you want it for HDFS 
specifically, or simply whatever worked?

> An option to have only leaders write and replicas read when using a shared 
> file system with SolrCloud.
> --
>
> Key: SOLR-6237
> URL: https://issues.apache.org/jira/browse/SOLR-6237
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs, SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: 0001-unified.patch, SOLR-6237.patch, Unified Replication 
> Design.pdf
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-08 Thread Danyal Prout (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788237#comment-16788237
 ] 

Danyal Prout commented on SOLR-13234:
-

Thanks for finding this, Hoss. My bad – I've just looked at the test, and the 
intent is to see whether any Solr JVM metrics for each node are being reported 
by the exporter. However, it specifically looks for the count of terminated 
threads, and I suspect in this case no threads had been terminated yet.

It should be a pretty simple update to fix this test. I can send a PR 
tonight/tomorrow morning to address this. However I'm not sure if it's best to 
open a new PR or add an additional commit to the original one.

> Prometheus Metric Exporter Not Threadsafe
> -
>
> Key: SOLR-13234
> URL: https://issues.apache.org/jira/browse/SOLR-13234
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.6, 8.0
>Reporter: Danyal Prout
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>  Labels: metric-collector
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-13234-branch_7x.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Solr Prometheus Exporter collects metrics when it receives an HTTP request 
> from Prometheus. Prometheus sends this request on its [scrape 
> interval|https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config].
>  When the time taken to collect the Solr metrics is greater than the scrape 
> interval of the Prometheus server, this results in concurrent metric 
> collection occurring in this 
> [method|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L86].
>  This method doesn’t appear to be thread safe; for instance, you could have 
> concurrent modifications of a 
> [map|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L119].
>  After a while the Solr Exporter process becomes nondeterministic; we've 
> observed NPEs and loss of metrics.
> To address this, I'm proposing the following fixes:
> 1. Read/parse the configuration at startup and make it immutable.
>  2. Collect metrics from Solr on an interval controlled by the Solr Exporter, 
> and cache the metric samples to return during Prometheus scraping. Metric 
> collection can be expensive (for example, executing arbitrary Solr searches), 
> so it's not ideal to allow concurrent metric collection on an interval that 
> the Solr Exporter does not control.
> There are also a few other performance improvements that we've made while 
> fixing this, for example using the ClusterStateProvider instead of sending 
> multiple HTTP requests to each Solr node to look up all the cores.
> I'm currently finishing up these changes which I'll submit as a PR.
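
For context, a minimal sketch of item 2 above (interval-driven collection with 
a cached snapshot), using hypothetical class and method names rather than the 
actual SolrCollector API:

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch, not the actual SolrCollector: collect metrics on an
// interval owned by the exporter and serve a cached snapshot to Prometheus, so
// concurrent scrapes never trigger concurrent collection.
public class CachingMetricsCollector implements AutoCloseable {

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final AtomicReference<List<String>> cachedSamples =
      new AtomicReference<>(Collections.<String>emptyList());

  public CachingMetricsCollector(long periodSeconds) {
    // A single thread runs the expensive collection, so any mutable state used
    // during collection is confined to that thread.
    scheduler.scheduleAtFixedRate(
        () -> cachedSamples.set(collectFromSolr()), 0, periodSeconds, TimeUnit.SECONDS);
  }

  /** Called from the scrape endpoint; returns the latest snapshot without blocking. */
  public List<String> samples() {
    return cachedSamples.get();
  }

  // Placeholder for the expensive per-node / per-core metric collection.
  private List<String> collectFromSolr() {
    return Collections.emptyList();
  }

  @Override
  public void close() {
    scheduler.shutdownNow();
  }
}
{code}

The single scheduler thread is the only writer, so scrapes that arrive faster 
than collection completes only ever read the last published snapshot.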



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Furkan KAMACI
Hi,

+1!

Kind Regards,
Furkan KAMACI

On Fri, Mar 8, 2019 at 10:08 PM Tomás Fernández Löbbe 
wrote:

> SUCCESS! [1:13:11.720840]
>
> +1
>
> Thanks Jim!
>
> On Fri, Mar 8, 2019 at 9:38 AM Uwe Schindler  wrote:
>
>> Hi,
>>
>>
>>
>> I did the same tests as last time. Here are the results:
>>
>>
>>
>> https://jenkins.thetaphi.de/job/Lucene-Solr-Release-Tester/18/console
>>
>> SUCCESS! [2:36:19.284235]
>>
>>
>>
>> The testing was done with Java 8 and Java 9 (this is why it took longer).
>>
>>
>>
>> I also checked Changes.txt, looks fine!
>>
>>
>>
>> I also checked the ZIP files of Lucene and Solr. Lucene looks as usual,
>> MIGRATE.txt is also fine – thanks for adding recent information! JAR files
>> also look fine, compiled with the correct version of Java, and the patched
>> multi-release class files are there.
>>
>>
>>
>> Apache Solr was unzipped and quickly tested on Windows: Startup with Java
>> 8 and Java 11 worked without any problems from a directory with whitespace
>> in path. I was able to do a HTTP/2 request with CURL (non-TLS) on both O/S
>> (as ALPN is not available without TLS, the curl client needed to upgrade
>> the request, so you see “101 Switching Protocols”):
>>
>>
>>
>> *Uwe Schindler@VEGA:*~ > curl -I --http2 localhost:8983/solr/
>>
>> HTTP/1.1 101 Switching Protocols
>>
>>
>>
>> HTTP/2 200
>>
>> x-frame-options: DENY
>>
>> content-type: text/html;charset=utf-8
>>
>> content-length: 14662
>>
>>
>>
>> So, it worked. In the webbrowser it did not use HTTP/2, as browsers
>> require SSL and ALPN for that (by default).
>>
>>
>>
>> Next I enabled TLS support by creating a keystore. This time it was
>> possible to start Solr on Java 8 and Java 11 – bug fixed. With Solr running
>> in Java 8, CURL was only able to do a HTTP/1.1 request because ALPN told
>> this to the curl client:
>>
>>
>>
>> *Uwe Schindler@VEGA:*~ > curl -k -I --http2 https://localhost:8983/solr/
>>
>> HTTP/1.1 200 OK
>>
>> X-Frame-Options: DENY
>>
>> Content-Type: text/html;charset=utf-8
>>
>> Content-Length: 14662
>>
>>
>>
>> With Java 11, my curl test worked with HTTP/2, this time no protocol
>> switch needed as ALPN is active:
>>
>>
>>
>> *Uwe Schindler@VEGA:*~ > curl -k -I --http2 https://localhost:8983/solr/
>>
>> HTTP/2 200
>>
>> x-frame-options: DENY
>>
>> content-type: text/html;charset=utf-8
>>
>> content-length: 14662
>>
>>
>>
>> I also checked in Chrome browser, this time it uses HTTP/2 to communicate
>> with the admin interface. Fine!
>>
>>
>>
>> *+1 to release Lucene/Solr 8.0.0 (RC4).*
>>
>>
>>
>> Uwe
>>
>>
>>
>> -
>>
>> Uwe Schindler
>>
>> Achterdiek 19, D-28357 Bremen
>>
>> http://www.thetaphi.de
>>
>> eMail: u...@thetaphi.de
>>
>>
>>
>> *From:* jim ferenczi 
>> *Sent:* Friday, March 8, 2019 1:43 PM
>> *To:* dev@lucene.apache.org
>> *Subject:* [VOTE] Release Lucene/Solr 8.0.0 RC4
>>
>>
>>
>> Please vote for release candidate 4 for Lucene/Solr 8.0.0
>>
>> The artifacts can be downloaded from
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
>> You can run the smoke tester directly with this command:
>> python3 -u dev-tools/scripts/smokeTestRelease.py 
>> *https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
>> *
>>
>>
>>
>> Here's my +1
>>
>> SUCCESS! [1:19:23.500783]
>>
>


[jira] [Commented] (SOLR-12732) TestLogWatcher failure on Jenkins

2019-03-08 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788217#comment-16788217
 ] 

Erick Erickson commented on SOLR-12732:
---

Oh, and I made a stupid error in the test: each loop runs for 10 seconds no 
matter what. I'll fix that. I don't really think it has anything to do with the 
test failures though.

I'm beasting it and loading the machines I'm running the tests on quite 
heavily, but if past performance is any indication I won't be able to get it to 
fail anyway. If that's the case, there's some additional debugging info I'll 
put in.
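
For reference, a hedged sketch of a poll-with-early-exit helper of the kind 
that avoids a fixed 10-second wait per loop (illustrative; the real test and 
watcher API may differ):

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Illustrative helper, not the actual TestLogWatcher code: poll a condition and
// return as soon as it holds, instead of always sleeping for the full timeout.
public final class WaitFor {

  private WaitFor() {}

  public static boolean waitFor(BooleanSupplier condition, long timeoutMs)
      throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (System.nanoTime() < deadline) {
      if (condition.getAsBoolean()) {
        return true; // early exit instead of a fixed wait
      }
      Thread.sleep(100);
    }
    return condition.getAsBoolean();
  }
}
{code}

A test could then call something like {{waitFor(() -> watcher.getLastEvent() > -1, 10000)}} 
and only fail once the timeout elapses, assuming the watcher exposes such a method.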

> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact; I think I see the problem. It'll 
> take me a lot of beasting to nail it though.
> Working hypothesis is that when we test whether the new searcher has no 
> messages, we can end up testing for no messages being logged against the 
> watcher before the new one _really_ gets active.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9698) Fix bin/solr script calculations - start/stop wait time and RMI_PORT on Windows

2019-03-08 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788202#comment-16788202
 ] 

Shawn Heisey commented on SOLR-9698:


Oh, the patch just submitted has an echo (for debugging) that needs to be 
removed before it is committed.

> Fix bin/solr script calculations - start/stop wait time and RMI_PORT on 
> Windows
> ---
>
> Key: SOLR-9698
> URL: https://issues.apache.org/jira/browse/SOLR-9698
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Erick Erickson
>Priority: Major
> Attachments: SOLR-9698.patch
>
>
> Killing the Solr process after 5 seconds is too harsh. See the discussion at 
> SOLR-9371.
> SOLR-9371 fixes the *nix versions, we need to do something similar for 
> Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Tomás Fernández Löbbe
SUCCESS! [1:13:11.720840]

+1

Thanks Jim!

On Fri, Mar 8, 2019 at 9:38 AM Uwe Schindler  wrote:

> Hi,
>
>
>
> I did the same tests as last time. Here are the results:
>
>
>
> https://jenkins.thetaphi.de/job/Lucene-Solr-Release-Tester/18/console
>
> SUCCESS! [2:36:19.284235]
>
>
>
> The testing was done with Java 8 and Java 9 (this is why it took longer).
>
>
>
> I also checked Changes.txt, looks fine!
>
>
>
> I also checked the ZIP files of Lucene and Solr. Lucene looks as usual,
> MIGRATE.txt is also fine – thanks for adding recent information! JAR files
> also look fine, compiled with the correct version of Java, and the patched
> multi-release class files are there.
>
>
>
> Apache Solr was unzipped and quickly tested on Windows: Startup with Java
> 8 and Java 11 worked without any problems from a directory with whitespace
> in path. I was able to do a HTTP/2 request with CURL (non-TLS) on both O/S
> (as ALPN is not available without TLS, the curl client needed to upgrade
> the request, so you see “101 Switching Protocols”):
>
>
>
> *Uwe Schindler@VEGA:*~ > curl -I --http2 localhost:8983/solr/
>
> HTTP/1.1 101 Switching Protocols
>
>
>
> HTTP/2 200
>
> x-frame-options: DENY
>
> content-type: text/html;charset=utf-8
>
> content-length: 14662
>
>
>
> So, it worked. In the webbrowser it did not use HTTP/2, as browsers
> require SSL and ALPN for that (by default).
>
>
>
> Next I enabled TLS support by creating a keystore. This time it was
> possible to start Solr on Java 8 and Java 11 – bug fixed. With Solr running
> in Java 8, CURL was only able to do a HTTP/1.1 request because ALPN told
> this to the curl client:
>
>
>
> *Uwe Schindler@VEGA:*~ > curl -k -I --http2 https://localhost:8983/solr/
>
> HTTP/1.1 200 OK
>
> X-Frame-Options: DENY
>
> Content-Type: text/html;charset=utf-8
>
> Content-Length: 14662
>
>
>
> With Java 11, my curl test worked with HTTP/2, this time no protocol
> switch needed as ALPN is active:
>
>
>
> *Uwe Schindler@VEGA:*~ > curl -k -I --http2 https://localhost:8983/solr/
>
> HTTP/2 200
>
> x-frame-options: DENY
>
> content-type: text/html;charset=utf-8
>
> content-length: 14662
>
>
>
> I also checked in Chrome browser, this time it uses HTTP/2 to communicate
> with the admin interface. Fine!
>
>
>
> *+1 to release Lucene/Solr 8.0.0 (RC4).*
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* jim ferenczi 
> *Sent:* Friday, March 8, 2019 1:43 PM
> *To:* dev@lucene.apache.org
> *Subject:* [VOTE] Release Lucene/Solr 8.0.0 RC4
>
>
>
> Please vote for release candidate 4 for Lucene/Solr 8.0.0
>
> The artifacts can be downloaded from
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py 
> *https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
> *
>
>
>
> Here's my +1
>
> SUCCESS! [1:19:23.500783]
>


[jira] [Commented] (SOLR-9698) Fix bin/solr script calculations - start/stop wait time and RMI_PORT on Windows

2019-03-08 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788174#comment-16788174
 ] 

Shawn Heisey commented on SOLR-9698:


Something that would be immensely useful for testing this sort of thing would 
be a URL endpoint that creates a thread deadlock within Solr, so that it will 
never gracefully stop.
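
For illustration, this is the kind of deliberate two-lock deadlock such a 
test-only endpoint could trigger (a generic sketch, not an existing Solr 
handler):

{code:java}
// Generic deadlock sketch: two threads acquire the same two locks in opposite
// order, so both block forever and the JVM can never stop gracefully.
public class DeadlockDemo {

  public static void main(String[] args) {
    final Object lockA = new Object();
    final Object lockB = new Object();

    Thread t1 = new Thread(() -> {
      synchronized (lockA) {
        sleepQuietly(100);
        synchronized (lockB) { /* never reached */ }
      }
    }, "deadlock-1");

    Thread t2 = new Thread(() -> {
      synchronized (lockB) {
        sleepQuietly(100);
        synchronized (lockA) { /* never reached */ }
      }
    }, "deadlock-2");

    t1.start();
    t2.start();
    // Both threads are now blocked forever, which is exactly the "never
    // gracefully stops" case the bin/solr stop logic has to handle.
  }

  private static void sleepQuietly(long ms) {
    try {
      Thread.sleep(ms);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}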

> Fix bin/solr script calculations - start/stop wait time and RMI_PORT on 
> Windows
> ---
>
> Key: SOLR-9698
> URL: https://issues.apache.org/jira/browse/SOLR-9698
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Erick Erickson
>Priority: Major
>
> Killing the Solr process after 5 seconds is too harsh. See the discussion at 
> SOLR-9371.
> SOLR-9371 fixes the *nix versions, we need to do something similar for 
> Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-08 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-13234:
-

Shalin: several jenkins failures from SolrExporterIntegrationTest.jvmMetrics in 
the few days since you've committed this, including this seed that reproduces 
reliably for me on master...

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=SolrExporterIntegrationTest -Dtests.method=jvmMetrics 
-Dtests.seed=D0408796D2DB58EC -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=tr-TR 
-Dtests.timezone=America/Argentina/Mendoza -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 3.92s | SolrExporterIntegrationTest.jvmMetrics <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<4> but 
was:<0>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([D0408796D2DB58EC:3F4BA79C7359478]:0)
   [junit4]>at 
org.apache.solr.prometheus.exporter.SolrExporterIntegrationTest.jvmMetrics(SolrExporterIntegrationTest.java:68)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}

and this seed which was reported by 8.x jenkins but also reproduces on master...

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=SolrExporterIntegrationTest -Dtests.method=jvmMetrics 
-Dtests.seed=62880F3B9F140C89 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=tr-TR -Dtests.timezone=Asia/Novokuznetsk -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 3.40s | SolrExporterIntegrationTest.jvmMetrics <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<4> but 
was:<0>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([62880F3B9F140C89:B13C32D48AFAC01D]:0)
   [junit4]>at 
org.apache.solr.prometheus.exporter.SolrExporterIntegrationTest.jvmMetrics(SolrExporterIntegrationTest.java:68)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}

> Prometheus Metric Exporter Not Threadsafe
> -
>
> Key: SOLR-13234
> URL: https://issues.apache.org/jira/browse/SOLR-13234
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.6, 8.0
>Reporter: Danyal Prout
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>  Labels: metric-collector
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-13234-branch_7x.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Solr Prometheus Exporter collects metrics when it receives an HTTP request 
> from Prometheus. Prometheus sends this request on its [scrape 
> interval|https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config].
>  When the time taken to collect the Solr metrics is greater than the scrape 
> interval of the Prometheus server, this results in concurrent metric 
> collection occurring in this 
> [method|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L86].
>  This method doesn’t appear to be thread safe; for instance, you could have 
> concurrent modifications of a 
> [map|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L119].
>  After a while the Solr Exporter process becomes nondeterministic; we've 
> observed NPEs and loss of metrics.
> To address this, I'm proposing the following fixes:
> 1. Read/parse the configuration at startup and make it immutable.
>  2. Collect metrics from Solr on an interval controlled by the Solr Exporter, 
> and cache the metric samples to return during Prometheus scraping. Metric 
> collection can be expensive (for example, executing arbitrary Solr searches), 
> so it's not ideal to allow concurrent metric collection on an interval that 
> the Solr Exporter does not control.
> There are also a few other performance improvements that we've made while 
> fixing this, for example using the ClusterStateProvider instead of sending 
> multiple HTTP requests to each Solr node to look up all the cores.
> I'm currently finishing up these changes which I'll submit as a PR.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13268) Clean up any test failures resulting from defaulting to async logging

2019-03-08 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788145#comment-16788145
 ] 

Erick Erickson commented on SOLR-13268:
---

[~cpoerschke] Can you hop in the way back machine and take a look at  
TestXmlQParser? It was created as part of SOLR-830 according to Git and my 
questions are:

1> does it really do anything? Looks like a stub.

2> Can I remove it?

3> If not <2>, does it belong in Solr tests or Lucene tests? AFAIK, it doesn't 
look like it was intended to do anything solr-ish.

It relates to this JIRA because I have a reproducible seed on this test class. 
I can fix it without too much trouble, but when I looked at it I wondered 
whether the test shouldn't just be removed.

> Clean up any test failures resulting from defaulting to async logging
> -
>
> Key: SOLR-13268
> URL: https://issues.apache.org/jira/browse/SOLR-13268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-13268.patch, SOLR-13268.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This is a catch-all for test failures due to the async logging changes. So 
> far, I see a couple of failures on JDK13 only. I'll collect a "starter set" 
> here; these are likely systemic, so once the root cause is found/fixed, the 
> others are likely fixed as well.
> JDK13:
> ant test  -Dtestcase=TestJmxIntegration -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=lv-LV 
> -Dtests.timezone=Asia/Riyadh -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> ant test  -Dtestcase=TestDynamicURP -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=rwk 
> -Dtests.timezone=Australia/Brisbane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1786 - Failure

2019-03-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1786/

2 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([A23D00C6F401CC94]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest

Error Message:
Suite timeout exceeded (>= 360 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 360 msec).
at __randomizedtesting.SeedInfo.seed([A23D00C6F401CC94]:0)




Build Log:
[...truncated 14385 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest_A23D00C6F401CC94-001/init-core-data-001
   [junit4]   2> 395384 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 395385 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 395385 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 396158 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.a.h.u.NativeCodeLoader Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 397732 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 398052 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 398096 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: 
c4550056e785fb5665914545889f21dc136ad9e6; jvm 1.8.0_191-b12
   [junit4]   2> 398102 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 398102 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 398102 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.e.j.s.session node0 Scavenging every 60ms
   [junit4]   2> 398105 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@655c873d{static,/static,jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,AVAILABLE}
   [junit4]   2> 398404 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.e.j.s.h.ContextHandler Started 
o.e.j.w.WebAppContext@33b20724{hdfs,/,file:///x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J2/temp/jetty-localhost-43104-hdfs-_-any-6596358922321797196.dir/webapp/,AVAILABLE}{/hdfs}
   [junit4]   2> 398405 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.e.j.s.AbstractConnector Started 
ServerConnector@538519af{HTTP/1.1,[http/1.1]}{localhost:43104}
   [junit4]   2> 398405 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.e.j.s.Server Started @398467ms
   [junit4]   2> 400089 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.a.h.h.s.c.MetricsLoggerTask Metrics logging will not be async since the 
logger is not log4j
   [junit4]   2> 400518 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 400527 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[A23D00C6F401CC94]-worker) [
] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: 

[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-03-08 Thread Richard (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788105#comment-16788105
 ] 

Richard commented on SOLR-13240:


I have come across this same problem, except I notice it when experimenting 
with the autoscaling features, and when Solr is running the ComputePlan. 

I did some debugging of the underlying culprit:
{code:java}
List<Pair<ReplicaInfo, Row>> validReplicas = getValidReplicas(true, true, -1);
for (Pair<ReplicaInfo, Row> x : validReplicas) {
  log.warn("Pair: {}", x.first());
}
validReplicas.sort(leaderLast);
{code}

I found the following results, which I find really odd _(I've removed any 
company-sensitive information, hence the generic collection names and node 
URLs)_:
{code:json}
  {
    "first": {
      "core_node95": {
        "core": "collection_name1_shard8_replica_n92",
        "leader": "true",
        "INDEX.sizeInBytes": 6.426125764846802e-08,
        "base_url": "http://127.0.0.1:8081/solr",
        "node_name": "127.0.0.1:8081_solr",
        "state": "active",
        "type": "NRT",
        "force_set_state": "false",
        "shard": "shard8",
        "collection": "collection_name1"
      }
    },
    "second": "127.0.0.1:8081_solr"
  },
  {
    "first": {
      "core_node7": {
        "core": "collection_name1_shard12_replica_n4",
        "leader": "true",
        "INDEX.sizeInBytes": 6.426125764846802e-08,
        "base_url": "http://127.0.0.1:8081/solr",
        "node_name": "127.0.0.1:8081_solr",
        "state": "active",
        "type": "NRT",
        "force_set_state": "false",
        "shard": "shard12",
        "collection": "collection_name1"
      }
    },
    "second": "127.0.0.1:8081_solr"
  },
  {
    "first": {
      "7851000": {
        "type": "NRT",
        "INDEX.sizeInBytes": 6.426125764846802e-08,
        "core": "7851000",
        "shard": "shard2",
        "collection": "collection_name2",
        "node_name": "0.0.0.0:8080_solr"
      }
    },
    "second": "0.0.0.0:8080_solr"
  }
{code}

I've tried to create a simple unit test for the Comparator in 
{{MoveReplicaSuggester.java}} _(which is causing this problem)_; however, I am 
struggling to reproduce results that make it throw the exception. I've tried 
lists of {{ReplicaInfo}}s where all are leaders, none is a leader, all but one 
is a leader, inverted, and alternating. So I'm wondering whether the code that 
gathers the {{ReplicaInfo}} is the real problem, if it's creating a {{Pair}} 
object with that kind of data _(the last JSON blob)_.
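
For what it's worth, here is a tiny standalone sketch of how a comparator in 
this style can violate the contract (it only resembles the leader-last 
ordering; it is not the exact MoveReplicaSuggester code):

{code:java}
import java.util.Comparator;

// Illustrative sketch: a comparator that decides purely from its first argument
// is not symmetric, which java.util sorting detects on large enough inputs and
// reports as "Comparison method violates its general contract!".
public class ComparatorContractDemo {

  static final Comparator<Boolean> leaderLastLike =
      (isLeader1, isLeader2) -> isLeader1 ? 1 : -1; // ignores isLeader2

  public static void main(String[] args) {
    Boolean a = Boolean.TRUE; // a leader
    Boolean b = Boolean.TRUE; // another leader
    int ab = leaderLastLike.compare(a, b); // 1
    int ba = leaderLastLike.compare(b, a); // also 1; the contract requires the negation
    System.out.println("symmetric? " + (Integer.signum(ab) == -Integer.signum(ba)));
    // Prints "symmetric? false". With entries like the third JSON blob above,
    // where the leader flag is missing, sorting a long enough list with such a
    // comparator can throw IllegalArgumentException.
  }
}
{code}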

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> 

RE: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Uwe Schindler
Hi,

 

I did the same tests as last time. Here are the results:

 

https://jenkins.thetaphi.de/job/Lucene-Solr-Release-Tester/18/console

SUCCESS! [2:36:19.284235]

 

The testing was done with Java 8 and Java 9 (this is why it took longer).

 

I also checked Changes.txt, looks fine!

 

I also checked the ZIP files of Lucene and Solr. Lucene looks as usual, 
MIGRATE.txt is also fine – thanks for adding recent information! JAR files also 
look fine, compiled with the correct version of Java, and the patched 
multi-release class files are there.

 

Apache Solr was unzipped and quickly tested on Windows: Startup with Java 8 and 
Java 11 worked without any problems from a directory with whitespace in path. I 
was able to do a HTTP/2 request with CURL (non-TLS) on both O/S (as ALPN is not 
available without TLS, the curl client needed to upgrade the request, so you 
see “101 Switching Protocols”):

 

Uwe Schindler@VEGA:~ > curl -I --http2 localhost:8983/solr/

HTTP/1.1 101 Switching Protocols

 

HTTP/2 200

x-frame-options: DENY

content-type: text/html;charset=utf-8

content-length: 14662

 

So, it worked. In the webbrowser it did not use HTTP/2, as browsers require SSL 
and ALPN for that (by default).

 

Next I enabled TLS support by creating a keystore. This time it was possible to 
start Solr on Java 8 and Java 11 – bug fixed. With Solr running in Java 8, CURL 
was only able to do a HTTP/1.1 request because ALPN told this to the curl 
client:

 

Uwe Schindler@VEGA:~ > curl -k -I --http2 https://localhost:8983/solr/

HTTP/1.1 200 OK

X-Frame-Options: DENY

Content-Type: text/html;charset=utf-8

Content-Length: 14662

 

With Java 11, my curl test worked with HTTP/2, this time no protocol switch 
needed as ALPN is active:

 

Uwe Schindler@VEGA:~ > curl -k -I --http2 https://localhost:8983/solr/

HTTP/2 200

x-frame-options: DENY

content-type: text/html;charset=utf-8

content-length: 14662

 

I also checked in Chrome browser, this time it uses HTTP/2 to communicate with 
the admin interface. Fine!

 

+1 to release Lucene/Solr 8.0.0 (RC4).

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de

 

From: jim ferenczi  
Sent: Friday, March 8, 2019 1:43 PM
To: dev@lucene.apache.org
Subject: [VOTE] Release Lucene/Solr 8.0.0 RC4

 

Please vote for release candidate 4 for Lucene/Solr 8.0.0

The artifacts can be downloaded from 
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
You can run the smoke tester directly with this command:
python3 -u dev-tools/scripts/smokeTestRelease.py 
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979

 

Here's my +1

SUCCESS! [1:19:23.500783]



Re: [DISCUSS] Opening old indices for reading

2019-03-08 Thread Cassandra Targett
I have a question about Simon’s commit that he discussed in an earlier mail to 
this thread, found at 
https://github.com/apache/lucene-solr/commit/0c4c885214ef30627a01e320f9c861dc2521b752

I see the commit diffs and files changed in GitHub at the above URL, but one 
odd thing about it is that it doesn’t refer to any branch and a scan of the 
code doesn’t show these changes at all. I looked for branches and PRs and 
didn’t find anything that jumped out at me. There also weren’t any 
notifications to the commits@lucene.a.o list about these changes.

So, were the changes really made? Was it just intended as some code for 
discussion, or was it meant to be in master branch? If the former, how does one 
make a commit without a branch? If the change was intended to be in master, 
though, it seems something has gone awry and we should try to fix it.

Cassandra
On Jan 31, 2019, 8:23 AM -0600, Adrien Grand , wrote:
> This looks reasonable to me.
>
> On Tue, Jan 29, 2019 at 4:23 PM Simon Willnauer
>  wrote:
> >
> > thanks folks,
> >
> > these are all good points. I created a first cut of what I had in mind
> > [1] . It's relatively simple and from a java visibility perspective
> > the only change that a user can take advantage of is this [2] and this
> > [3] respectively. This would allow opening indices back to Lucene 7.0
> > given that the codecs and postings formats are available. From a
> > documentation perspective I added [4]. Thisi s a pure read-only change
> > and doesn't allow opening these indices for writing. You can't merge
> > them neither would you be able to open an index writer on top of it. I
> > still need to add support to Check-Index but that's what it is
> > basically.
> >
> > lemme know what you think,
> >
> > simon
> > [1] 
> > https://github.com/apache/lucene-solr/commit/0c4c885214ef30627a01e320f9c861dc2521b752
> > [2] 
> > https://github.com/apache/lucene-solr/commit/0c4c885214ef30627a01e320f9c861dc2521b752#diff-e0352098b027d6f41a17c068ad8d7ef0R689
> > [3] 
> > https://github.com/apache/lucene-solr/commit/0c4c885214ef30627a01e320f9c861dc2521b752#diff-e3ccf9ee90355b10f2dd22ce2da6c73cR306
> > [4] 
> > https://github.com/apache/lucene-solr/commit/0c4c885214ef30627a01e320f9c861dc2521b752#diff-1bedf4d0d52ff88ef8a16a6788ad7684R86
> >
> > On Fri, Jan 25, 2019 at 3:14 PM Michael McCandless
> >  wrote:
> > >
> > > Another example is long ago Lucene allowed pos=-1 to be indexed and it 
> > > caused all sorts of problems. We also stopped allowing positions close to 
> > > Integer.MAX_VALUE (https://issues.apache.org/jira/browse/LUCENE-6382). 
> > > Yet another is allowing negative vInts which are possible but horribly 
> > > inefficient (https://issues.apache.org/jira/browse/LUCENE-3738).
> > >
> > > We do need to be free to fix these problems and then know after N+2 
> > > releases that no index can have the issue.
> > >
> > > I like the idea of providing "expert" / best effort / limited way of 
> > > carrying forward such ancient indices, but I think the huge challenge for 
> > > someone using that tool on an important index will be enumerating the 
> > > list of issues that might "matter" (the 3 Adrien listed + the 3 I listed 
> > > above is a start for this list) and taking appropriate steps to "correct" 
> > > the index if so. E.g. on a norms encoding change, somehow these expert 
> > > tools must decode norms the old way, encode them the new way, and then 
> > > rewrite the norms files. Or if the index has pos=-1, changing that to 
> > > pos=0. Or if it has negative vInts, ... etc.
> > >
> > > Or maybe the "special" DirectoryReader only reads stored fields? And so 
> > > you would enumerate your _source and reindex into the latest format ...
> > >
> > > > Something like https://issues.apache.org/jira/browse/LUCENE-8277 would
> > > > help make it harder to introduce corrupt data in an index.
> > >
> > > +1
> > >
> > > Every time we catch something like "don't allow pos = -1 into the index" 
> > > we need somehow remember to go and add the check also in addIndices.
> > >
> > > Mike McCandless
> > >
> > > http://blog.mikemccandless.com
> > >
> > >
> > > On Fri, Jan 25, 2019 at 3:52 AM Adrien Grand  wrote:
> > > >
> > > > Agreed with Michael that setting expectations is going to be
> > > > important. The thing that I would like to make sure is that we would
> > > > never refrain from moving Lucene forward because of this feature. In
> > > > particular, lucene-core should be free to make assumptions that are
> > > > valid for N and N-1 indices without worrying about the fact that we
> > > > have this super-expert feature that allows opening older indices. Here
> > > > are some assumptions that I have in mind which have not always been
> > > > true:
> > > > - norms might be encoded in a different way (this changed in 7)
> > > > - all index files have a checksum (only true since Lucene 5)
> > > > - offsets are always going forward (only enforced since Lucene 7)
> > > >
> > > > This means that carrying 

[jira] [Reopened] (SOLR-12732) TestLogWatcher failure on Jenkins

2019-03-08 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-12732:
---

I see another one, 
{code}
java.lang.AssertionError: expected:<-1> but was:<1552044187760>
at 
__randomizedtesting.SeedInfo.seed([47AF3982482428B:F9E1A3D8DD8AFD85]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.logging.TestLogWatcher.testLog4jWatcher(TestLogWatcher.java:52)
{code}

What kind of message a newly-opened logger would have in it I don't know; I may 
need to put some additional logging in.

> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact; I think I see the problem. It'll 
> take me a lot of beasting to nail it though.
> Working hypothesis is that when we test whether the new searcher has no 
> messages, we can end up testing for no messages being logged against the 
> watcher before the new one _really_ gets active.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #471: SOLR-8335 HdfsLockFactory should eventually release index after crash.

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #471: SOLR-8335 HdfsLockFactory 
should eventually release index after crash.
URL: https://github.com/apache/lucene-solr/pull/471#discussion_r263850196
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/store/hdfs/HdfsLockFactoryTest.java
 ##
 @@ -17,46 +17,173 @@
 package org.apache.solr.store.hdfs;
 
 import java.io.IOException;
+import java.lang.invoke.MethodHandles;
+import java.net.ConnectException;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Delayed;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.Consumer;
 
+import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.lucene.store.Lock;
+import org.apache.lucene.store.LockLostException;
 import org.apache.lucene.store.LockObtainFailedException;
+import org.apache.lucene.util.TestRuleRestoreSystemProperties;
 import org.apache.solr.SolrTestCaseJ4;
 import org.apache.solr.cloud.hdfs.HdfsTestUtil;
 import org.apache.solr.util.BadHdfsThreadsFilter;
+import org.junit.After;
 import org.junit.AfterClass;
+import org.junit.Before;
 import org.junit.BeforeClass;
+import org.junit.Rule;
 import org.junit.Test;
+import org.mockito.internal.util.reflection.FieldSetter;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static java.util.Arrays.asList;
+import static 
org.apache.solr.store.hdfs.HdfsLockFactory.DEFAULT_LOCK_HOLD_TIMEOUT;
+import static org.apache.solr.store.hdfs.HdfsLockFactory.DEFAULT_UPDATE_DELAY;
+import static org.apache.solr.store.hdfs.HdfsLockFactory.LOCK_HOLD_TIMEOUT_KEY;
+import static org.mockito.BDDMockito.given;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.anyLong;
+import static org.mockito.Mockito.mock;
 
-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;
 
 @ThreadLeakFilters(defaultFilters = true, filters = {
 BadHdfsThreadsFilter.class // hdfs currently leaks thread(s)
 })
 public class HdfsLockFactoryTest extends SolrTestCaseJ4 {
-  
+  private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
   private static MiniDFSCluster dfsCluster;
+  private static NameNode nameNode;
+  private static Configuration nameNodeConf;
+  private long waitTime = 0;
+  private List allTasks = new LinkedList<>();
+  private HdfsDirectory dir;
+  private Path lockPath;
+  private HdfsLockFactory.HdfsLock lock = null;
+  private HdfsLockFactory.HdfsLock lock2 = null;
+  private Exception latestException = null;
+  private List allLocks = new LinkedList<>();
+  private static MiniDFSCluster.DataNodeProperties dataNodeProperties;
+
+  @Rule
+  public TestRuleRestoreSystemProperties p = new 
TestRuleRestoreSystemProperties(LOCK_HOLD_TIMEOUT_KEY);
 
 Review comment:
   Is this necessary?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #471: SOLR-8335 HdfsLockFactory should eventually release index after crash.

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #471: SOLR-8335 HdfsLockFactory 
should eventually release index after crash.
URL: https://github.com/apache/lucene-solr/pull/471#discussion_r263849042
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/update/TestInPlaceUpdatesStandalone.java
 ##
 @@ -18,6 +18,7 @@
 
 
 Review comment:
   Unnecessary changes in this file?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #471: SOLR-8335 HdfsLockFactory should eventually release index after crash.

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #471: SOLR-8335 HdfsLockFactory 
should eventually release index after crash.
URL: https://github.com/apache/lucene-solr/pull/471#discussion_r263848915
 
 

 ##
 File path: 
solr/test-framework/src/java/org/apache/solr/SolrIgnoredThreadsFilter.java
 ##
 @@ -60,6 +61,11 @@ public boolean reject(Thread t) {
   return true;
 }
 
+if (threadName.startsWith(HdfsLockFactory.HDFS_LOCK_UPDATE_PREFIX)) {
 
 Review comment:
   This doesn't seem right. We should be able to shutdown our threads 
correctly...
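
For illustration, a minimal sketch of the alternative being suggested (not the 
PR's actual HdfsLockFactory code): let the factory own its update thread and 
stop it in close(), so no thread-leak filter entry is needed.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: the background refresh work is owned by this object
// and shut down deterministically in close().
public class LockRefresherSketch implements AutoCloseable {

  private final ExecutorService executor = Executors.newSingleThreadExecutor(r -> {
    Thread t = new Thread(r, "hdfs-lock-update");
    t.setDaemon(true);
    return t;
  });

  public void start(Runnable refreshTask) {
    executor.submit(refreshTask);
  }

  @Override
  public void close() throws InterruptedException {
    executor.shutdownNow();
    if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
      throw new IllegalStateException("hdfs-lock-update thread did not stop in time");
    }
  }
}
{code}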


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 2994 - Unstable

2019-03-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2994/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.0/14/consoleText

[repro] Revision: a9d80bf18af04850ea21dcf88a40367559fb246e

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=OpenCloseCoreStressTest 
-Dtests.method=test10Minutes -Dtests.seed=728780917FE3B4B8 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
 -Dtests.locale=lt-LT -Dtests.timezone=America/Ensenada -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
20de3d2ee0d234d04bbcf7c2cddc18ff67a09f8b
[repro] git fetch
[repro] git checkout a9d80bf18af04850ea21dcf88a40367559fb246e

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   OpenCloseCoreStressTest
[repro] ant compile-test

[...truncated 3572 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.OpenCloseCoreStressTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
 -Dtests.seed=728780917FE3B4B8 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
 -Dtests.locale=lt-LT -Dtests.timezone=America/Ensenada -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 3717577 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: org.apache.solr.core.OpenCloseCoreStressTest
[repro] git checkout 20de3d2ee0d234d04bbcf7c2cddc18ff67a09f8b

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-13244) Nodes view fails when a node is temporarily down

2019-03-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13244:
---
Fix Version/s: 8.1

> Nodes view fails when a node is temporarily down
> 
>
> Key: SOLR-13244
> URL: https://issues.apache.org/jira/browse/SOLR-13244
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
> Attachments: solr13244.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Cloud->Nodes view lists all nodes, grouped by host. However, if a node is 
> temporarily down (not in live_nodes), the whole screen is broken, and instead 
> error messages like
> {noformat}
> Requested node example.com:8983_solr is not part of cluster{noformat}
> This happens on the Ajax request to fetch {{admin/metrics}} and 
> {{admin/info/system}}. A better approach would be to skip requesting metrics 
> for downed nodes but still display them in the list, perhaps with a DOWN 
> label or other background colour, to clearly tell that the node is configured 
> but not live.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13244) Nodes view fails when a node is temporarily down

2019-03-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13244:
---
Attachment: solr13244.png

> Nodes view fails when a node is temporarily down
> 
>
> Key: SOLR-13244
> URL: https://issues.apache.org/jira/browse/SOLR-13244
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: solr13244.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Cloud->Nodes view lists all nodes, grouped by host. However, if a node is 
> temporarily down (not in live_nodes), the whole screen is broken, and instead 
> error messages like
> {noformat}
> Requested node example.com:8983_solr is not part of cluster{noformat}
> This happens on the Ajax request to fetch {{admin/metrics}} and 
> {{admin/info/system}}. A better approach would be to skip requesting metrics 
> for downed nodes but still display them in the list, perhaps with a DOWN 
> label or other background colour, to clearly tell that the node is configured 
> but not live.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13244) Nodes view fails when a node is temporarily down

2019-03-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788026#comment-16788026
 ] 

Jan Høydahl commented on SOLR-13244:


PR [#600|https://github.com/apache/lucene-solr/pull/600] has a fix, see 
screenshot:

!solr13244.png!

> Nodes view fails when a node is temporarily down
> 
>
> Key: SOLR-13244
> URL: https://issues.apache.org/jira/browse/SOLR-13244
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: solr13244.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Cloud->Nodes view lists all nodes, grouped by host. However, if a node is 
> temporarily down (not in live_nodes), the whole screen is broken, and instead 
> error messages like
> {noformat}
> Requested node example.com:8983_solr is not part of cluster{noformat}
> This happens on the Ajax request to fetch {{admin/metrics}} and 
> {{admin/info/system}}. A better approach would be to skip requesting metrics 
> for downed nodes but still display them in the list, perhaps with a DOWN 
> label or other background colour, to clearly tell that the node is configured 
> but not live.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy opened a new pull request #600: SOLR-13244: Nodes view fails when a node is temporarily down

2019-03-08 Thread GitBox
janhoy opened a new pull request #600: SOLR-13244: Nodes view fails when a node 
is temporarily down
URL: https://github.com/apache/lucene-solr/pull/600
 
 
   First fix.
   * Do not request metrics from dead nodes
   * Display dead nodes with red background in table
   * Fetch host-level metrics from first live node on host
   
   Still needs CHANGES entry


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13244) Nodes view fails when a node is temporarily down

2019-03-08 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-13244:
--

Assignee: Jan Høydahl

> Nodes view fails when a node is temporarily down
> 
>
> Key: SOLR-13244
> URL: https://issues.apache.org/jira/browse/SOLR-13244
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> The Cloud->Nodes view lists all nodes, grouped by host. However, if a node is 
> temporarily down (not in live_nodes), the whole screen is broken, and instead 
> error messages like
> {noformat}
> Requested node example.com:8983_solr is not part of cluster{noformat}
> This happens on the Ajax request to fetch {{admin/metrics}} and 
> {{admin/info/system}}. A better approach would be to skip requesting metrics 
> for downed nodes but still display them in the list, perhaps with a DOWN 
> label or other background colour, to clearly tell that the node is configured 
> but not live.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13308) Move SystemPropertiesRestoreRule to SolrTestCase

2019-03-08 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788017#comment-16788017
 ] 

Erick Erickson commented on SOLR-13308:
---

In particular, IDK whether doing this in the base class will also reset values 
in the subclasses.
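
For context, a minimal sketch of how the rule behaves when declared once in a 
base class (class names are illustrative, not the real SolrTestCase):

{code:java}
import com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestRule;

// Illustrative base class, not the real SolrTestCase: the rule snapshots system
// properties before each test method and restores them afterwards; JUnit picks
// up @Rule fields inherited from superclasses.
public class BaseTestCaseSketch {

  @Rule
  public final TestRule restoreSysProps = new SystemPropertiesRestoreRule();
}

class SubclassTestSketch extends BaseTestCaseSketch {

  @Test
  public void setsASystemProperty() {
    // Rolled back by the inherited rule once this test method finishes.
    System.setProperty("example.flag", "true");
  }
}
{code}

Since {{@Rule}} wraps each test method, properties a subclass sets inside a 
test are restored; properties set in a subclass {{@BeforeClass}} would likely 
need a {{@ClassRule}} instead.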

> Move SystemPropertiesRestoreRule to SolrTestCase
> 
>
> Key: SOLR-13308
> URL: https://issues.apache.org/jira/browse/SOLR-13308
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0), 8.1
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> [~krisden] just traced flaky HDFS tests to not clearing sysprops, see the 
> discussion at SOLR-13297.
> Part of SOLR-13268 is to derive all Solr test classes from SolrTestCase.
> Once that's pushed, does it make sense to put that rule in SolrTestCase so it 
> gets performed automagically so this kind of error doesn't creep back in?
> I don't know the details of how that rule is implemented or whether this 
> makes sense at that level but wanted to discuss.
> I've assigned it to myself, but that's just so I don't lose track of it, but 
> anyone else who wants to pick it up please feel free.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13308) Move SystemPropertiesRestoreRule to SolrTestCase

2019-03-08 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-13308:
-

 Summary: Move SystemPropertiesRestoreRule to SolrTestCase
 Key: SOLR-13308
 URL: https://issues.apache.org/jira/browse/SOLR-13308
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0), 8.1
Reporter: Erick Erickson
Assignee: Erick Erickson


[~krisden] just traced flaky HDFS tests to not clearing sysprops, see the 
discussion at SOLR-13297.

Part of SOLR-13268 is to derive all Solr test classes from SolrTestCase.

Once that's pushed, does it make sense to put that rule in SolrTestCase so it 
gets performed automagically so this kind of error doesn't creep back in?

I don't know the details of how that rule is implemented or whether this makes 
sense at that level but wanted to discuss.

I've assigned it to myself, but that's just so I don't lose track of it, but 
anyone else who wants to pick it up please feel free.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Alan Woodward
+1

SUCCESS! [1:46:36.093805]

> On 8 Mar 2019, at 15:18, Ignacio Vera  wrote:
> 
> +1
> 
> SUCCESS! [1:26:13.469237]
> 
> On Fri, Mar 8, 2019 at 4:00 PM Jan Høydahl wrote:
> +1
> 
> SUCCESS! [1:37:19.142875]
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com 
> 
>> On 8 Mar 2019, at 13:43, jim ferenczi wrote:
>> 
>> Please vote for release candidate 4 for Lucene/Solr 8.0.0
>> 
>> The artifacts can be downloaded from 
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
>>  
>> 
>> You can run the smoke tester directly with this command:
>> python3 -u dev-tools/scripts/smokeTestRelease.py 
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
>>  
>> 
>> 
>> Here's my +1
>> SUCCESS! [1:19:23.500783]
> 



[jira] [Resolved] (SOLR-3769) unit test failing in solr4 beta

2019-03-08 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-3769.
--
Resolution: Won't Do

We won't be going back to fix 4.0 tests.

> unit test failing in solr4 beta
> ---
>
> Key: SOLR-3769
> URL: https://issues.apache.org/jira/browse/SOLR-3769
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Gary Yngve
>Priority: Major
>
> ant test  -Dtestcase=LeaderElectionTest -Dtests.seed=3F5F939070D8F356 
> -Dtests.slow=true -Dtests.locale=sl_SI -Dtests.timezone=Africa/El_Aaiun 
> -Dtests.file.encoding=US-ASCII
> [junit4:junit4] ERROR   0.00s | LeaderElectionTest (suite)
> [junit4:junit4]> Throwable #1: java.lang.AssertionError: ERROR: 
> SolrZkClient opens=33 closes=28
> [junit4:junit4]>  at 
> __randomizedtesting.SeedInfo.seed([3F5F939070D8F356]:0)
> [junit4:junit4]>  at org.junit.Assert.fail(Assert.java:93)
> [junit4:junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.endTrackingZkClients(SolrTestCaseJ4.java:230)
> [junit4:junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:83)
> [junit4:junit4]>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> [junit4:junit4]>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> [junit4:junit4]>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [junit4:junit4]>  at java.lang.reflect.Method.invoke(Method.java:601)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1995)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$1100(RandomizedRunner.java:132)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:754)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
> [junit4:junit4]>  at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleReportUncaughtExceptions$1.evaluate(TestRuleReportUncaughtExceptions.java:68)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleNoInstanceHooksOverrides$1.evaluate(TestRuleNoInstanceHooksOverrides.java:53)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleNoStaticHooksShadowing$1.evaluate(TestRuleNoStaticHooksShadowing.java:52)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:36)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
> [junit4:junit4]>  at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:605)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.access$400(RandomizedRunner.java:132)
> [junit4:junit4]>  at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:551)
> [junit4:junit4]>
> [junit4:junit4] Completed in 221.77s, 4 tests, 1 failure, 1 error <<< 
> FAILURES!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.0-Windows (64bit/jdk1.8.0_172) - Build # 63 - Unstable!

2019-03-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.0-Windows/63/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:510)  
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:961)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:876)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:503)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:346) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:424) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1184)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:772)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:969)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:876)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1056)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:876)  at 

[jira] [Commented] (SOLR-13283) when facet with certain terms via {!terms}, facet.limit, facet.offset does not work

2019-03-08 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787973#comment-16787973
 ] 

Mikhail Khludnev commented on SOLR-13283:
-

bq. Also, {!terms} format should be covered in documentation with set of 
features supported for this format
It's already documented at 
https://lucene.apache.org/solr/guide/7_6/faceting.html#limiting-facet-with-certain-terms . 
Sorting is documented in the 7.7 ref guide.

> when facet with certain terms via {!terms}, facet.limit, facet.offset does 
> not work
> ---
>
> Key: SOLR-13283
> URL: https://issues.apache.org/jira/browse/SOLR-13283
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.7
>Reporter: superman369
>Priority: Major
>
> h4. I ran a test in Solr 7.7. Limiting a field facet to certain terms via 
> \{!terms} works with facet.sort=count, but facet.limit and facet.offset do 
> not work. What's wrong?
> h4. For example, with 
> facet.field=\{!terms='1453,1452,1248'}location.authorIds, facet=true, 
> facet.limit=1 and facet.offset=1, the facet result for "location.authorIds" 
> has 3 items. I think it should have one item.
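
As a point of reference, a minimal SolrJ sketch of the request described in the 
report above; the base URL and collection name are hypothetical, while the 
field name and terms are taken from the report:

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TermsFacetSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical Solr endpoint and collection.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setFacet(true);
      // Restrict the facet to specific terms via the {!terms} local params syntax.
      q.addFacetField("{!terms='1453,1452,1248'}location.authorIds");
      // facet.sort works for {!terms} since SOLR-13156; facet.limit and
      // facet.offset are what this issue reports as having no effect.
      q.set("facet.sort", "count");
      q.set("facet.limit", 1);
      q.set("facet.offset", 1);
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getFacetField("location.authorIds").getValues());
    }
  }
}
{code}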



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13297) StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3

2019-03-08 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787963#comment-16787963
 ] 

Kevin Risden commented on SOLR-13297:
-

In the HdfsUnloadDistributedZkTest case, the failure should be fixed by 
SOLR-13307; in the reproduction case there is something else going on. I'll 
look at that separately after SOLR-13307.

> StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3
> -
>
> Key: SOLR-13297
> URL: https://issues.apache.org/jira/browse/SOLR-13297
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: builds.a.o_solr_badapple_nightly_8x_7.txt.gz, 
> builds.a.o_solr_badapple_nightly_master_51.txt.gz, 
> builds.a.o_solr_nightly_8x_36.txt.gz, builds.a.o_solr_repro_2965.txt.gz
>
>
> As reported by [~hossman] on SOLR-9515, StressHdfsTest and 
> HdfsUnloadDistributedZkTest are failing more regularly. Should figure out 
> what is going on here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Ignacio Vera
+1

SUCCESS! [1:26:13.469237]

On Fri, Mar 8, 2019 at 4:00 PM Jan Høydahl  wrote:

> +1
>
> SUCCESS! [1:37:19.142875]
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 8 Mar 2019, at 13:43, jim ferenczi wrote:
>
> Please vote for release candidate 4 for Lucene/Solr 8.0.0
>
> The artifacts can be downloaded from
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py 
> *https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
> *
>
> Here's my +1
> SUCCESS! [1:19:23.500783]
>
>
>


[jira] [Commented] (SOLR-13297) StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3

2019-03-08 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787962#comment-16787962
 ] 

Kevin Risden commented on SOLR-13297:
-

Went through the logs for StressHdfsTest, and in each case SOLR-13307 should 
help. Checking the logs for HdfsUnloadDistributedZkTest next.

> StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3
> -
>
> Key: SOLR-13297
> URL: https://issues.apache.org/jira/browse/SOLR-13297
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: builds.a.o_solr_badapple_nightly_8x_7.txt.gz, 
> builds.a.o_solr_badapple_nightly_master_51.txt.gz, 
> builds.a.o_solr_nightly_8x_36.txt.gz, builds.a.o_solr_repro_2965.txt.gz
>
>
> As reported by [~hossman] on SOLR-9515, StressHdfsTest and 
> HdfsUnloadDistributedZkTest are failing more regularly. Should figure out 
> what is going on here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS tests clear System properties they set

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS 
tests clear System properties they set
URL: https://github.com/apache/lucene-solr/pull/599#discussion_r263813103
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/cloud/MoveReplicaHDFSFailoverTest.java
 ##
 @@ -63,9 +63,17 @@ public static void setupClass() throws Exception {
 
   @AfterClass
   public static void teardownClass() throws Exception {
-cluster.shutdown(); // need to close before the MiniDFSCluster
-HdfsTestUtil.teardownClass(dfsCluster);
-dfsCluster = null;
+try {
+  shutdownCluster();
+} finally {
+  try {
+HdfsTestUtil.teardownClass(dfsCluster);
+  } finally {
+dfsCluster = null;
+System.clearProperty("solr.hdfs.home");
 
 Review comment:
   Has never been cleaned up before.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13307) Ensure HDFS tests clear System properties they set

2019-03-08 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13307:

Attachment: SOLR-13307.patch

> Ensure HDFS tests clear System properties they set
> --
>
> Key: SOLR-13307
> URL: https://issues.apache.org/jira/browse/SOLR-13307
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-13307.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While looking at SOLR-13297, found there are system properties that are not 
> cleared in HDFS tests. This can cause other HDFS tests in the same JVM to 
> have weird configs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7414) CSVResponseWriter returns empty field when fl alias is combined with '*' selector

2019-03-08 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787937#comment-16787937
 ] 

Munendra S N commented on SOLR-7414:


[~dsmiley]
In both the CSV and XLSX response writers, the fields to be returned are 
computed explicitly up front (the header is written first in the response, and 
the field order needs to be maintained). I think this is the reason for the 
current behavior of both response writers.

> CSVResponseWriter returns empty field when fl alias is combined with '*' 
> selector
> -
>
> Key: SOLR-7414
> URL: https://issues.apache.org/jira/browse/SOLR-7414
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Reporter: Michael Lawrence
>Priority: Major
> Attachments: SOLR-7414-old.patch, SOLR-7414.patch, SOLR-7414.patch, 
> SOLR-7414.patch, SOLR-7414.patch, SOLR-7414.patch
>
>
> Attempting to retrieve all fields while renaming one, e.g., "inStock" to 
> "stocked" (URL below), results in CSV output that has a column for "inStock" 
> (should be "stocked"), and the column has no values. 
> steps to reproduce using 5.1...
> {noformat}
> $ bin/solr -e techproducts
> ...
> $ curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/techproducts/update?commit=true' --data-binary 
> '[{ "id" : "aaa", "bar_i" : 7, "inStock" : true }, { "id" : "bbb", "bar_i" : 
> 7, "inStock" : false }, { "id" : "ccc", "bar_i" : 7, "inStock" : true }]'
> {"responseHeader":{"status":0,"QTime":730}}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=id,stocked:inStock&wt=csv'
> id,stocked
> aaa,true
> bbb,false
> ccc,true
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=*,stocked:inStock&wt=csv'
> bar_i,id,_version_,inStock
> 7,aaa,1498719888088236032,
> 7,bbb,1498719888090333184,
> 7,ccc,1498719888090333185,
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7&fl=stocked:inStock,*&wt=csv'
> bar_i,id,_version_,inStock
> 7,aaa,1498719888088236032,
> 7,bbb,1498719888090333184,
> 7,ccc,1498719888090333185,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Jan Høydahl
+1

SUCCESS! [1:37:19.142875]

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 8 Mar 2019, at 13:43, jim ferenczi wrote:
> 
> Please vote for release candidate 4 for Lucene/Solr 8.0.0
> 
> The artifacts can be downloaded from 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
>  
> 
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
>  
> 
> 
> Here's my +1
> SUCCESS! [1:19:23.500783]



Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Kevin Risden
+1 SUCCESS! [1:16:56.071493]

Kevin Risden


On Fri, Mar 8, 2019 at 9:49 AM Michael McCandless 
wrote:

> +1
>
> SUCCESS! [0:43:36.323853]
>
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Mar 8, 2019 at 7:43 AM jim ferenczi  wrote:
>
>> Please vote for release candidate 4 for Lucene/Solr 8.0.0
>>
>> The artifacts can be downloaded from
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
>> You can run the smoke tester directly with this command:
>> python3 -u dev-tools/scripts/smokeTestRelease.py 
>> *https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
>> *
>>
>> Here's my +1
>> SUCCESS! [1:19:23.500783]
>>
>


Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Michael McCandless
+1

SUCCESS! [0:43:36.323853]


Mike McCandless

http://blog.mikemccandless.com


On Fri, Mar 8, 2019 at 7:43 AM jim ferenczi  wrote:

> Please vote for release candidate 4 for Lucene/Solr 8.0.0
>
> The artifacts can be downloaded from
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py 
> *https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
> *
>
> Here's my +1
> SUCCESS! [1:19:23.500783]
>


[jira] [Commented] (SOLR-13307) Ensure HDFS tests clear System properties they set

2019-03-08 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787917#comment-16787917
 ] 

Kevin Risden commented on SOLR-13307:
-

Opened a PR to review this and run tests on that branch.

> Ensure HDFS tests clear System properties they set
> --
>
> Key: SOLR-13307
> URL: https://issues.apache.org/jira/browse/SOLR-13307
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.x, master (9.0)
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While looking at SOLR-13297, found there are system properties that are not 
> cleared in HDFS tests. This can cause other HDFS tests in the same JVM to 
> have weird configs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.0.0 RC4

2019-03-08 Thread Adrien Grand
+1 SUCCESS! [1:33:23.139194]

On Fri, Mar 8, 2019 at 1:43 PM jim ferenczi  wrote:
>
> Please vote for release candidate 4 for Lucene/Solr 8.0.0
>
> The artifacts can be downloaded from 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979/
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.0.0-RC4-rev2ae4746365c1ee72a0047ced7610b2096e438979
>
> Here's my +1
> SUCCESS! [1:19:23.500783]



-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13283) when facet with certain terms via {!terms}, facet.limit, facet.offset does not work

2019-03-08 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787925#comment-16787925
 ] 

Munendra S N edited comment on SOLR-13283 at 3/8/19 2:24 PM:
-

[~superman369]
 If I understand correctly, \{!terms} is a special case where you want only the 
specified terms to be returned in the facet response, along with their counts.
 Until recently, none of the facet field features worked with this. In 
SOLR-13156, facet sorting support was added, since in some cases the terms 
might need to be sorted by count; it also kept back-compatibility (sorting is 
applied only when facet.sort is set).

I would like to understand, since the required terms are already specified, 
what the use case is for further restricting the terms in the facet response. 
A similar question applies to offset.
 There may be a case for mincount support, where \{!terms} could be used for 
exists-style functionality with cardinality greater than 1 (for example, only 
terms with at least the given count).
 Also, should \{!terms} support other facet field features like missing, 
exists, etc.?

I think most of these features are not required here. Also, the \{!terms} 
format should be covered in the documentation, with the set of features 
supported for it.


was (Author: munendrasn):
[~superman369]
 If understand correctly, \{!terms} is a special case where you would want only 
terms specified should be returned in facet response along with count.
 Until recently, none of the facet field features used to work on this. In 
SOLR-13156, facet sorting support has been added. Since, in some cases terms 
might need to be sorted by count. Also, that it supported 
back-compatibility(sorted only when facet.sort)
 
 I would like to understand as already required terms are specified what is the 
usecase to restrict the terms in facet response? Similar explanation goes for 
offset too.
 There maybe a case of mincount support, where \{!terms} could be used for 
exists functionality with cardinality greater than 1(For example, terms with 
greater than equal to given count)
 Also, Should

{!terms} support other facet field features like missing, exists and etc
 
 I think, most of features are not required here. Also, \{!terms}

format should be covered in documentation with set of features supported for 
this format

> when facet with certain terms via {!terms}, facet.limit, facet.offset does 
> not work
> ---
>
> Key: SOLR-13283
> URL: https://issues.apache.org/jira/browse/SOLR-13283
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.7
>Reporter: superman369
>Priority: Major
>
> h4. I ran a test in Solr 7.7. Limiting a field facet to certain terms via 
> \{!terms} works with facet.sort=count, but facet.limit and facet.offset do 
> not work. What's wrong?
> h4. For example, with 
> facet.field=\{!terms='1453,1452,1248'}location.authorIds, facet=true, 
> facet.limit=1 and facet.offset=1, the facet result for "location.authorIds" 
> has 3 items. I think it should have one item.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13283) when facet with certain terms via {!terms}, facet.limit, facet.offset does not work

2019-03-08 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787925#comment-16787925
 ] 

Munendra S N commented on SOLR-13283:
-

[~superman369]
 If I understand correctly, \{!terms} is a special case where you want only the 
specified terms to be returned in the facet response, along with their counts.
 Until recently, none of the facet field features worked with this. In 
SOLR-13156, facet sorting support was added, since in some cases the terms 
might need to be sorted by count; it also kept back-compatibility (sorting is 
applied only when facet.sort is set).

I would like to understand, since the required terms are already specified, 
what the use case is for further restricting the terms in the facet response. 
A similar question applies to offset.
 There may be a case for mincount support, where \{!terms} could be used for 
exists-style functionality with cardinality greater than 1 (for example, only 
terms with at least the given count).
 Also, should \{!terms} support other facet field features like missing, 
exists, etc.?

I think most of these features are not required here. Also, the \{!terms} 
format should be covered in the documentation, with the set of features 
supported for it.

> when facet with certain terms via {!terms}, facet.limit, facet.offset does 
> not work
> ---
>
> Key: SOLR-13283
> URL: https://issues.apache.org/jira/browse/SOLR-13283
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.7
>Reporter: superman369
>Priority: Major
>
> h4. I ran a test in Solr 7.7. Limiting a field facet to certain terms via 
> \{!terms} works with facet.sort=count, but facet.limit and facet.offset do 
> not work. What's wrong?
> h4. For example, with 
> facet.field=\{!terms='1453,1452,1248'}location.authorIds, facet=true, 
> facet.limit=1 and facet.offset=1, the facet result for "location.authorIds" 
> has 3 items. I think it should have one item.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13307) Ensure HDFS tests clear System properties they set

2019-03-08 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787920#comment-16787920
 ] 

Kevin Risden edited comment on SOLR-13307 at 3/8/19 2:21 PM:
-

Overview of what was happening:
 * HDFS test runs and sets some System properties like blocksperbank
 * HDFS test finishes and doesn't clear the system property
 * Another HDFS test runs in the same JVM and uses the existing System property.
 * This second HDFS test fails since the system property should not have been 
set.

I checked some of the recent failures, and they all had at least one test in 
the same JVM that had missed clearing System properties. In some cases, the 
system properties were set again, so that's why we don't see too many failures 
from this.

It also explains why I could not reproduce any individual test failure: it was 
the leftover changes from a different test causing the issues.


was (Author: risdenk):
Overview of what was happening:
 * HDFS test runs and sets some System properties like blocksperbank
 * HDFS test finishes and doesn't clear the system property
 * Another HDFS test runs in the same JVM and uses the existing System property.
 * This second HDFS test fails since the system property should not have been 
set.

I checked some of the recent failures, and they all had at least one test in 
the same JVM that had missed clearing System properties. In some cases, the 
system properties were set again, so that's why we don't see too many failures 
from this.

> Ensure HDFS tests clear System properties they set
> --
>
> Key: SOLR-13307
> URL: https://issues.apache.org/jira/browse/SOLR-13307
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.x, master (9.0)
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While looking at SOLR-13297, found there are system properties that are not 
> cleared in HDFS tests. This can cause other HDFS tests in the same JVM to 
> have weird configs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13307) Ensure HDFS tests clear System properties they set

2019-03-08 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787920#comment-16787920
 ] 

Kevin Risden commented on SOLR-13307:
-

Overview of what was happening:
 * HDFS test runs and sets some System properties like blocksperbank
 * HDFS test finishes and doesn't clear the system property
 * Another HDFS test runs in the same JVM and uses the existing System property.
 * This second HDFS test fails since the system property should not have been 
set.

I checked some of the recent failures, and they all had at least one test in 
the same JVM that had missed clearing System properties. In some cases, the 
system properties were set again, so that's why we don't see too many failures 
from this.
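
To make the sequence concrete, a toy sketch (the class names are made up, and 
this is not code from the PR) of how a property leaked by one suite is silently 
picked up by the next suite running in the same forked JVM:

{code:java}
import org.junit.BeforeClass;

// First suite: sets a property in @BeforeClass but never clears it.
public class LeakyHdfsTestSketch {
  @BeforeClass
  public static void setupClass() {
    System.setProperty("solr.hdfs.blockcache.blocksperbank", "2048");
    // ... tests run; teardownClass forgets System.clearProperty(...)
  }
}

// Second suite, run later in the same JVM: expects the default value but
// silently inherits "2048" left behind by the previous suite.
class UnsuspectingHdfsTestSketch {
  @BeforeClass
  public static void setupClass() {
    String blocksPerBank =
        System.getProperty("solr.hdfs.blockcache.blocksperbank", "<default>");
    System.out.println("blocksperbank = " + blocksPerBank);
  }
}
{code}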

> Ensure HDFS tests clear System properties they set
> --
>
> Key: SOLR-13307
> URL: https://issues.apache.org/jira/browse/SOLR-13307
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.x, master (9.0)
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While looking at SOLR-13297, found there are system properties that are not 
> cleared in HDFS tests. This can cause other HDFS tests in the same JVM to 
> have weird configs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS tests clear System properties they set

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS 
tests clear System properties they set
URL: https://github.com/apache/lucene-solr/pull/599#discussion_r263789982
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTlogReplayBufferedWhileIndexingTest.java
 ##
 @@ -30,35 +30,34 @@
 
 @Slow
 @Nightly
-// 12-Jun-2018 
@LuceneTestCase.BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028")
-// commented out on: 24-Dec-2018 
@LuceneTestCase.BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028")
 // added 20-Jul-2018
 @ThreadLeakFilters(defaultFilters = true, filters = {
 BadHdfsThreadsFilter.class // hdfs currently leaks thread(s)
 })
 public class HdfsTlogReplayBufferedWhileIndexingTest extends 
TlogReplayBufferedWhileIndexingTest {
-  
+  private static MiniDFSCluster dfsCluster;
+
   public HdfsTlogReplayBufferedWhileIndexingTest() throws Exception {
 super();
   }
 
-  private static MiniDFSCluster dfsCluster;
-  
   @BeforeClass
   public static void setupClass() throws Exception {
-dfsCluster = 
HdfsTestUtil.setupClass(createTempDir().toFile().getAbsolutePath());
 System.setProperty("solr.hdfs.blockcache.blocksperbank", "2048");
+dfsCluster = 
HdfsTestUtil.setupClass(createTempDir().toFile().getAbsolutePath());
   }
   
   @AfterClass
   public static void teardownClass() throws Exception {
-HdfsTestUtil.teardownClass(dfsCluster);
-dfsCluster = null;
+try {
+  HdfsTestUtil.teardownClass(dfsCluster);
+} finally {
+  dfsCluster = null;
+  System.clearProperty("solr.hdfs.blockcache.blocksperbank");
 
 Review comment:
   Has never been cleaned up before.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS tests clear System properties they set

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS 
tests clear System properties they set
URL: https://github.com/apache/lucene-solr/pull/599#discussion_r263789777
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsChaosMonkeyNothingIsSafeTest.java
 ##
 @@ -33,20 +33,23 @@
 @ThreadLeakFilters(defaultFilters = true, filters = {
 BadHdfsThreadsFilter.class // hdfs currently leaks thread(s)
 })
-// commented out on: 24-Dec-2018 
@BadApple(bugUrl="https://issues.apache.org/jira/browse/SOLR-12028, 
https://issues.apache.org/jira/browse/SOLR-10191")
 public class HdfsChaosMonkeyNothingIsSafeTest extends 
ChaosMonkeyNothingIsSafeTest {
   private static MiniDFSCluster dfsCluster;
   
   @BeforeClass
   public static void setupClass() throws Exception {
-dfsCluster = 
HdfsTestUtil.setupClass(createTempDir().toFile().getAbsolutePath());
 System.setProperty("solr.hdfs.blockcache.global", "true"); // always use 
global cache, this test can create a lot of directories
+dfsCluster = 
HdfsTestUtil.setupClass(createTempDir().toFile().getAbsolutePath());
   }
   
   @AfterClass
   public static void teardownClass() throws Exception {
-HdfsTestUtil.teardownClass(dfsCluster);
-dfsCluster = null;
+try {
+  HdfsTestUtil.teardownClass(dfsCluster);
+} finally {
+  dfsCluster = null;
+  System.clearProperty("solr.hdfs.blockcache.global");
 
 Review comment:
   Has never been cleaned up before.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS tests clear System properties they set

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS 
tests clear System properties they set
URL: https://github.com/apache/lucene-solr/pull/599#discussion_r263789733
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/cloud/hdfs/HDFSCollectionsAPITest.java
 ##
 @@ -55,9 +55,17 @@ public static void setupClass() throws Exception {
 
   @AfterClass
   public static void teardownClass() throws Exception {
-cluster.shutdown(); // need to close before the MiniDFSCluster
-HdfsTestUtil.teardownClass(dfsCluster);
-dfsCluster = null;
+try {
+  shutdownCluster(); // need to close before the MiniDFSCluster
+} finally {
+  try {
+HdfsTestUtil.teardownClass(dfsCluster);
+  } finally {
+dfsCluster = null;
+System.clearProperty("solr.hdfs.blockcache.enabled");
 
 Review comment:
   Has never been cleaned up before.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS tests clear System properties they set

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS 
tests clear System properties they set
URL: https://github.com/apache/lucene-solr/pull/599#discussion_r263789620
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/cloud/MoveReplicaHDFSTest.java
 ##
 @@ -50,8 +50,8 @@ public static void teardownClass() throws Exception {
   HdfsTestUtil.teardownClass(dfsCluster);
 } finally {
   dfsCluster = null;
-  System.setProperty("solr.hdfs.blockcache.blocksperbank", "512");
 
 Review comment:
   This was introduced recently for the Hadoop 3 upgrade. Copy/paste error but 
definitely causing some of the new test failures.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS tests clear System properties they set

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS 
tests clear System properties they set
URL: https://github.com/apache/lucene-solr/pull/599#discussion_r263789863
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsRecoveryZkTest.java
 ##
 @@ -51,9 +51,17 @@ public static void setupClass() throws Exception {
   
   @AfterClass
   public static void teardownClass() throws Exception {
-cluster.shutdown(); // need to close before the MiniDFSCluster
-HdfsTestUtil.teardownClass(dfsCluster);
-dfsCluster = null;
+try {
+  shutdownCluster(); // need to close before the MiniDFSCluster
+} finally {
+  try {
+HdfsTestUtil.teardownClass(dfsCluster);
+  } finally {
+dfsCluster = null;
+System.clearProperty("solr.hdfs.blockcache.blocksperbank");
 
 Review comment:
   Has never been cleaned up before.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS tests clear System properties they set

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS 
tests clear System properties they set
URL: https://github.com/apache/lucene-solr/pull/599#discussion_r263789884
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsRestartWhileUpdatingTest.java
 ##
 @@ -50,14 +48,16 @@ public static void setupClass() throws Exception {
   
   @AfterClass
   public static void teardownClass() throws Exception {
-HdfsTestUtil.teardownClass(dfsCluster);
-dfsCluster = null;
+try {
+  HdfsTestUtil.teardownClass(dfsCluster);
+} finally {
+  dfsCluster = null;
+  System.clearProperty("solr.hdfs.blockcache.blocksperbank");
 
 Review comment:
   Has never been cleaned up before.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS tests clear System properties they set

2019-03-08 Thread GitBox
risdenk commented on a change in pull request #599: SOLR-13307: Ensure HDFS 
tests clear System properties they set
URL: https://github.com/apache/lucene-solr/pull/599#discussion_r263789812
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsChaosMonkeySafeLeaderTest.java
 ##
 @@ -45,8 +44,12 @@ public static void setupClass() throws Exception {
   
   @AfterClass
   public static void teardownClass() throws Exception {
-HdfsTestUtil.teardownClass(dfsCluster);
-dfsCluster = null;
+try {
+  HdfsTestUtil.teardownClass(dfsCluster);
+} finally {
+  dfsCluster = null;
+  System.clearProperty("solr.hdfs.blockcache.global");
 
 Review comment:
   Has never been cleaned up before.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


