Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1570/

2 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at https://127.0.0.1:39386/solr: Solr cloud with available number of nodes:1 is insufficient for restoring a collection with 2 shards, total replicas per shard 3 and maxShardsPerNode 4. Consider increasing maxShardsPerNode value OR number of available nodes.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:39386/solr: Solr cloud with available number of nodes:1 is insufficient for restoring a collection with 2 shards, total replicas per shard 3 and maxShardsPerNode 4. Consider increasing maxShardsPerNode value OR number of available nodes.
        at __randomizedtesting.SeedInfo.seed([622F3E9135CCDCD5:EA7B014B9B30B12D]:0)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
        at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:320)
        at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:145)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testLeftOuterJoinStream

Error Message:
org.apache.solr.common.SolrException: Could not load collection from ZK: collection1_collection

Stack Trace:
java.io.IOException: org.apache.solr.common.SolrException: Could not load collection from ZK: collection1_collection
        at __randomizedtesting.SeedInfo.seed([8DC5860AF43C3ED6:A9A35F2ADF7C0FAD]:0)
        at org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:378)
        at org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:274)
        at org.apache.solr.client.solrj.io.stream.PushBackStream.open(PushBackStream.java:71)
        at org.apache.solr.client.solrj.io.stream.JoinStream.open(JoinStream.java:150)
        at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.getTuples(StreamDecoratorTest.java:3916)
        at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testLeftOuterJoinStream(StreamDecoratorTest.java:1822)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Could not load collection from ZK: collection1_collection
        at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1316)
        at org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:732)
        at org.apache.solr.common.cloud.ClusterState$CollectionRef.get(ClusterState.java:386)
        at org.apache.solr.common.cloud.ClusterState.getCollectionsMap(ClusterState.java:164)
        at org.apache.solr.client.solrj.io.stream.CloudSolrStream.getSlices(CloudSolrStream.java:331)
        at org.apache.solr.client.solrj.io.stream.TupleStream.getShards(TupleStream.java:141)
        at org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:363)
        ... 44 more
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/collection1_collection/state.json
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215)
        at org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:341)
        at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
        at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:341)
        at org.apache.solr.common.cloud.ZkStateReader.fetchCollectionState(ZkStateReader.java:1328)
        at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1314)
        ... 50 more
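
Note on the failure above: the root cause is an expired ZooKeeper session, hit while CloudSolrStream.open() was resolving /collections/collection1_collection/state.json to build its per-shard streams, so the left-outer-join stream never opened. A rough, self-contained sketch of the open/read/close pattern in play at that point (zkHost, field names, and query params are illustrative assumptions, not the test's own values):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.io.SolrClientCache;
    import org.apache.solr.client.solrj.io.Tuple;
    import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
    import org.apache.solr.client.solrj.io.stream.StreamContext;
    import org.apache.solr.common.params.ModifiableSolrParams;

    public class StreamReadSketch {
      // Sketch only: reads every tuple from a collection; it is open() that loads the
      // collection's state from ZooKeeper, which is where the session expiry surfaced.
      public static List<Tuple> readAll(String zkHost, String collection) throws Exception {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("q", "*:*");
        params.set("fl", "id");
        params.set("sort", "id asc");  // streaming requires an explicit sort

        CloudSolrStream stream = new CloudSolrStream(zkHost, collection, params);
        StreamContext context = new StreamContext();
        context.setSolrClientCache(new SolrClientCache());
        stream.setStreamContext(context);

        List<Tuple> tuples = new ArrayList<>();
        try {
          stream.open();
          for (Tuple t = stream.read(); !t.EOF; t = stream.read()) {
            tuples.add(t);
          }
        } finally {
          stream.close();
        }
        return tuples;
      }
    }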




Build Log:
[...truncated 13400 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore
   [junit4]   2> 645059 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[622F3E9135CCDCD5]-worker) [    ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/init-core-data-001
   [junit4]   2> 645059 WARN  
(SUITE-TestLocalFSCloudBackupRestore-seed#[622F3E9135CCDCD5]-worker) [    ] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=2 numCloses=2
   [junit4]   2> 645154 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[622F3E9135CCDCD5]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 645168 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[622F3E9135CCDCD5]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 645171 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[622F3E9135CCDCD5]-worker) [    ] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001
   [junit4]   2> 645171 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[622F3E9135CCDCD5]-worker) [    ] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 645192 INFO  (Thread-774) [    ] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 645192 INFO  (Thread-774) [    ] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 645229 ERROR (Thread-774) [    ] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 645302 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[622F3E9135CCDCD5]-worker) [    ] 
o.a.s.c.ZkTestServer start zk server on port:40215
   [junit4]   2> 645377 INFO  (zkConnectionManagerCallback-906-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 645435 INFO  (jetty-launcher-903-thread-1) [    ] 
o.e.j.s.Server jetty-9.4.10.v20180503; built: 2018-05-03T15:56:21.710Z; git: 
daa59876e6f384329b122929e70a80934569428c; jvm 1.8.0_172-b11
   [junit4]   2> 645498 INFO  (jetty-launcher-903-thread-2) [    ] 
o.e.j.s.Server jetty-9.4.10.v20180503; built: 2018-05-03T15:56:21.710Z; git: 
daa59876e6f384329b122929e70a80934569428c; jvm 1.8.0_172-b11
   [junit4]   2> 645557 INFO  (jetty-launcher-903-thread-1) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 645606 INFO  (jetty-launcher-903-thread-1) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 645608 INFO  (jetty-launcher-903-thread-1) [    ] 
o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 645643 INFO  (jetty-launcher-903-thread-2) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 645643 INFO  (jetty-launcher-903-thread-2) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 645643 INFO  (jetty-launcher-903-thread-2) [    ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 645663 INFO  (jetty-launcher-903-thread-2) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@575c6ece{/solr,null,AVAILABLE}
   [junit4]   2> 645665 INFO  (jetty-launcher-903-thread-2) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@112cb1f7{SSL,[ssl, 
http/1.1]}{127.0.0.1:39386}
   [junit4]   2> 645665 INFO  (jetty-launcher-903-thread-2) [    ] 
o.e.j.s.Server Started @645743ms
   [junit4]   2> 645665 INFO  (jetty-launcher-903-thread-2) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=39386}
   [junit4]   2> 645665 ERROR (jetty-launcher-903-thread-2) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 645665 INFO  (jetty-launcher-903-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 645665 INFO  (jetty-launcher-903-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 645665 INFO  (jetty-launcher-903-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 645665 INFO  (jetty-launcher-903-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 645665 INFO  (jetty-launcher-903-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-06-24T01:17:02.880Z
   [junit4]   2> 645683 INFO  (jetty-launcher-903-thread-1) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@5ce68993{/solr,null,AVAILABLE}
   [junit4]   2> 645683 INFO  (jetty-launcher-903-thread-1) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@67f96de4{SSL,[ssl, 
http/1.1]}{127.0.0.1:40995}
   [junit4]   2> 645716 INFO  (jetty-launcher-903-thread-1) [    ] 
o.e.j.s.Server Started @645794ms
   [junit4]   2> 645716 INFO  (jetty-launcher-903-thread-1) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=40995}
   [junit4]   2> 645716 ERROR (jetty-launcher-903-thread-1) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 645717 INFO  (jetty-launcher-903-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 645717 INFO  (jetty-launcher-903-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 645717 INFO  (jetty-launcher-903-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 645717 INFO  (jetty-launcher-903-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 645717 INFO  (jetty-launcher-903-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-06-24T01:17:02.932Z
   [junit4]   2> 645724 INFO  (zkConnectionManagerCallback-908-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 645737 INFO  (jetty-launcher-903-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 645755 INFO  (zkConnectionManagerCallback-910-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 645756 INFO  (jetty-launcher-903-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 647215 INFO  (jetty-launcher-903-thread-1) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:40215/solr
   [junit4]   2> 647256 INFO  (zkConnectionManagerCallback-914-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 647350 INFO  (zkConnectionManagerCallback-916-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 647901 INFO  (jetty-launcher-903-thread-2) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:40215/solr
   [junit4]   2> 647966 INFO  (zkConnectionManagerCallback-922-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 648055 INFO  (zkConnectionManagerCallback-924-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 648150 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 648151 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.c.OverseerElectionContext I am going to be 
the leader 127.0.0.1:39386_solr
   [junit4]   2> 648165 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.c.Overseer Overseer 
(id=72853228822265862-127.0.0.1:39386_solr-n_0000000000) starting
   [junit4]   2> 648254 INFO  (zkConnectionManagerCallback-931-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 648259 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:40215/solr ready
   [junit4]   2> 648368 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:39386_solr
   [junit4]   2> 648386 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 648401 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating 
transient cache for 2147483647 transient cores
   [junit4]   2> 648402 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:40995_solr
   [junit4]   2> 648531 INFO  (zkCallback-915-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 648551 INFO  (zkCallback-930-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 648641 INFO  (zkCallback-915-thread-2) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 648641 INFO  (zkCallback-923-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 648838 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 648896 INFO  (zkConnectionManagerCallback-936-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 648967 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (2)
   [junit4]   2> 648968 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:40215/solr ready
   [junit4]   2> 648968 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 649170 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39386.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 649309 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39386.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 649309 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39386.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 649310 INFO  (jetty-launcher-903-thread-2) 
[n:127.0.0.1:39386_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node2/.
   [junit4]   2> 649365 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_40995.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 649496 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_40995.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 649497 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_40995.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 649530 INFO  (jetty-launcher-903-thread-1) 
[n:127.0.0.1:40995_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node1/.
   [junit4]   2> 649815 INFO  (zkConnectionManagerCallback-940-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 649904 INFO  (zkConnectionManagerCallback-945-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 649957 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[622F3E9135CCDCD5]-worker) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 650030 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[622F3E9135CCDCD5]-worker) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:40215/solr ready
   [junit4]   2> 650360 INFO  
(TEST-TestLocalFSCloudBackupRestore.test-seed#[622F3E9135CCDCD5]) [    ] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 650508 INFO  (qtp1971084044-3742) [n:127.0.0.1:39386_solr    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
collection.configName=conf1&router.name=implicit&version=2&pullReplicas=1&shards=shard1,shard2&property.customKey=customValue&maxShardsPerNode=-1&router.field=shard_s&name=backuprestore&nrtReplicas=2&action=CREATE&tlogReplicas=0&wt=javabin
 and sendToOCPQueue=true
   [junit4]   2> 650542 INFO  (OverseerThreadFactory-1467-thread-1) [    ] 
o.a.s.c.a.c.CreateCollectionCmd Create collection backuprestore
   [junit4]   2> 650564 WARN  (OverseerThreadFactory-1467-thread-1) [    ] 
o.a.s.c.a.c.CreateCollectionCmd Specified number of replicas of 3 on collection 
backuprestore is higher than the number of Solr instances currently live or 
live and part of your createNodeSet(2). It's unusual to run two replica of the 
same slice on the same Solr-instance.
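
Note: the CREATE invocation logged above builds the source collection with an implicit router over two named shards (shard1, shard2), routed on shard_s, with 2 NRT + 1 PULL replicas per shard (6 cores total) and maxShardsPerNode=-1; the 3 replicas per shard against only 2 live nodes is what triggers the WARN just above. A rough SolrJ equivalent of those logged parameters, shown only as a hedged sketch (the 'client' variable is assumed, and the property.customKey setting is omitted):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class CreateSketch {
      // Sketch mirroring the logged CREATE parameters; not the test's own code.
      public static void create(SolrClient client) throws Exception {
        CollectionAdminRequest
            .createCollectionWithImplicitRouter("backuprestore", "conf1", "shard1,shard2",
                /* nrtReplicas */ 2, /* tlogReplicas */ 0, /* pullReplicas */ 1)
            .setMaxShardsPerNode(-1)
            .setRouterField("shard_s")
            .process(client);
      }
    }
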
   [junit4]   2> 650763 INFO  
(OverseerStateUpdate-72853228822265862-127.0.0.1:39386_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"backuprestore_shard1_replica_n1",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:40995/solr";,
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 650782 INFO  
(OverseerStateUpdate-72853228822265862-127.0.0.1:39386_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"backuprestore_shard1_replica_n2",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:39386/solr";,
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 650799 INFO  
(OverseerStateUpdate-72853228822265862-127.0.0.1:39386_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"backuprestore_shard1_replica_p5",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:40995/solr";,
   [junit4]   2>   "type":"PULL",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 650820 INFO  
(OverseerStateUpdate-72853228822265862-127.0.0.1:39386_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"backuprestore_shard2_replica_n6",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:39386/solr";,
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 650835 INFO  
(OverseerStateUpdate-72853228822265862-127.0.0.1:39386_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"backuprestore_shard2_replica_n8",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:40995/solr";,
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 650837 INFO  
(OverseerStateUpdate-72853228822265862-127.0.0.1:39386_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"backuprestore_shard2_replica_p10",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:39386/solr";,
   [junit4]   2>   "type":"PULL",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 651341 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr    
x:backuprestore_shard1_replica_n2] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node4&name=backuprestore_shard1_replica_n2&action=CREATE&numShards=2&shard=shard1&wt=javabin
   [junit4]   2> 651342 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr    
x:backuprestore_shard1_replica_n2] o.a.s.c.TransientSolrCoreCacheDefault 
Allocating transient cache for 2147483647 transient cores
   [junit4]   2> 651412 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr    
x:backuprestore_shard2_replica_n6] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node9&name=backuprestore_shard2_replica_n6&action=CREATE&numShards=2&shard=shard2&wt=javabin
   [junit4]   2> 651413 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr    
x:backuprestore_shard2_replica_p10] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=PULL&property.customKey=customValue&coreNodeName=core_node12&name=backuprestore_shard2_replica_p10&action=CREATE&numShards=2&shard=shard2&wt=javabin
   [junit4]   2> 651708 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr    
x:backuprestore_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node3&name=backuprestore_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&wt=javabin
   [junit4]   2> 651786 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr    
x:backuprestore_shard2_replica_n8] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node11&name=backuprestore_shard2_replica_n8&action=CREATE&numShards=2&shard=shard2&wt=javabin
   [junit4]   2> 651822 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr    
x:backuprestore_shard1_replica_p5] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=PULL&property.customKey=customValue&coreNodeName=core_node7&name=backuprestore_shard1_replica_p5&action=CREATE&numShards=2&shard=shard1&wt=javabin
   [junit4]   2> 652090 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 652224 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.s.IndexSchema [backuprestore_shard1_replica_n2] Schema name=minimal
   [junit4]   2> 652260 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 652260 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard1_replica_n2' using 
configuration from collection backuprestore, trusted=true
   [junit4]   2> 652260 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39386.solr.core.backuprestore.shard1.replica_n2' (registry 
'solr.core.backuprestore.shard1.replica_n2') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 652260 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 652261 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.SolrCore [[backuprestore_shard1_replica_n2] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node2/backuprestore_shard1_replica_n2],
 
dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node2/./backuprestore_shard1_replica_n2/data/]
   [junit4]   2> 652688 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 652761 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.s.IndexSchema [backuprestore_shard2_replica_p10] Schema name=minimal
   [junit4]   2> 652853 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 652853 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard2_replica_p10' 
using configuration from collection backuprestore, trusted=true
   [junit4]   2> 652854 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39386.solr.core.backuprestore.shard2.replica_p10' (registry 
'solr.core.backuprestore.shard2.replica_p10') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 652854 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 652854 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.c.SolrCore [[backuprestore_shard2_replica_p10] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node2/backuprestore_shard2_replica_p10],
 
dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node2/./backuprestore_shard2_replica_p10/data/]
   [junit4]   2> 652981 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 653320 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 653345 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 653366 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.s.IndexSchema [backuprestore_shard1_replica_n1] Schema name=minimal
   [junit4]   2> 653404 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 653404 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard1_replica_n1' using 
configuration from collection backuprestore, trusted=true
   [junit4]   2> 653432 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.s.IndexSchema [backuprestore_shard2_replica_n6] Schema name=minimal
   [junit4]   2> 653454 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_40995.solr.core.backuprestore.shard1.replica_n1' (registry 
'solr.core.backuprestore.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 653454 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 653454 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SolrCore [[backuprestore_shard1_replica_n1] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node1/backuprestore_shard1_replica_n1],
 
dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node1/./backuprestore_shard1_replica_n1/data/]
   [junit4]   2> 653468 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.s.IndexSchema [backuprestore_shard1_replica_p5] Schema name=minimal
   [junit4]   2> 653504 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 653505 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard2_replica_n6' using 
configuration from collection backuprestore, trusted=true
   [junit4]   2> 653505 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39386.solr.core.backuprestore.shard2.replica_n6' (registry 
'solr.core.backuprestore.shard2.replica_n6') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 653505 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 653505 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.c.SolrCore [[backuprestore_shard2_replica_n6] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node2/backuprestore_shard2_replica_n6],
 
dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node2/./backuprestore_shard2_replica_n6/data/]
   [junit4]   2> 653520 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 653539 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 653540 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard1_replica_p5' using 
configuration from collection backuprestore, trusted=true
   [junit4]   2> 653540 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_40995.solr.core.backuprestore.shard1.replica_p5' (registry 
'solr.core.backuprestore.shard1.replica_p5') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 653540 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 653540 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.c.SolrCore [[backuprestore_shard1_replica_p5] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node1/backuprestore_shard1_replica_p5],
 
dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node1/./backuprestore_shard1_replica_p5/data/]
   [junit4]   2> 653649 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.s.IndexSchema [backuprestore_shard2_replica_n8] Schema name=minimal
   [junit4]   2> 653652 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 653652 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard2_replica_n8' using 
configuration from collection backuprestore, trusted=true
   [junit4]   2> 653721 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_40995.solr.core.backuprestore.shard2.replica_n8' (registry 
'solr.core.backuprestore.shard2.replica_n8') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@3c5360d2
   [junit4]   2> 653721 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 653722 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.SolrCore [[backuprestore_shard2_replica_n8] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node1/backuprestore_shard2_replica_n8],
 
dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/tempDir-001/node1/./backuprestore_shard2_replica_n8/data/]
   [junit4]   2> 653819 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 653819 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 653820 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 653820 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 653839 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@148ec7b3[backuprestore_shard1_replica_n2] main]
   [junit4]   2> 653868 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 653869 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 653870 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 653870 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604114743122984960
   [junit4]   2> 653884 INFO  
(searcherExecutor-1476-thread-1-processing-n:127.0.0.1:39386_solr 
x:backuprestore_shard1_replica_n2 c:backuprestore s:shard1 r:core_node4) 
[n:127.0.0.1:39386_solr c:backuprestore s:shard1 r:core_node4 
x:backuprestore_shard1_replica_n2] o.a.s.c.SolrCore 
[backuprestore_shard1_replica_n2] Registered new searcher 
Searcher@148ec7b3[backuprestore_shard1_replica_n2] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 653906 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1 to Terms{values={core_node4=0}, 
version=0}
   [junit4]   2> 653923 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for 
shard shard1: total=2 found=1 timeoutin=9999ms
   [junit4]   2> 654648 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 654648 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 654650 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@133ac16b[backuprestore_shard2_replica_p10] main]
   [junit4]   2> 654663 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 654664 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 654664 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 654666 INFO  
(searcherExecutor-1477-thread-1-processing-n:127.0.0.1:39386_solr 
x:backuprestore_shard2_replica_p10 c:backuprestore s:shard2 r:core_node12) 
[n:127.0.0.1:39386_solr c:backuprestore s:shard2 r:core_node12 
x:backuprestore_shard2_replica_p10] o.a.s.c.SolrCore 
[backuprestore_shard2_replica_p10] Registered new searcher 
Searcher@133ac16b[backuprestore_shard2_replica_p10] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 654929 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 654929 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 654931 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@74fc413[backuprestore_shard1_replica_p5] main]
   [junit4]   2> 655049 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 655049 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 655051 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 655051 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 655061 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 655062 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 655062 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 655084 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.c.ZkController backuprestore_shard1_replica_p5 starting background 
replication from leader
   [junit4]   2> 655086 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@3f40a7b5[backuprestore_shard2_replica_n8] main]
   [junit4]   2> 655122 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 655122 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 655123 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 655173 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.c.ReplicateFromLeader Will start replication from leader with poll 
interval: 00:00:03
   [junit4]   2> 655175 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 655175 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 655176 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 655176 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 655178 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.c.ZkController backuprestore_shard2_replica_p10 starting background 
replication from leader
   [junit4]   2> 655178 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.c.ReplicateFromLeader Will start replication from leader with poll 
interval: 00:00:03
   [junit4]   2> 655195 INFO  
(searcherExecutor-1481-thread-1-processing-n:127.0.0.1:40995_solr 
x:backuprestore_shard1_replica_p5 c:backuprestore s:shard1 r:core_node7) 
[n:127.0.0.1:40995_solr c:backuprestore s:shard1 r:core_node7 
x:backuprestore_shard1_replica_p5] o.a.s.c.SolrCore 
[backuprestore_shard1_replica_p5] Registered new searcher 
Searcher@74fc413[backuprestore_shard1_replica_p5] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 655196 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604114744513396736
   [junit4]   2> 655198 INFO  
(searcherExecutor-1482-thread-1-processing-n:127.0.0.1:40995_solr 
x:backuprestore_shard2_replica_n8 c:backuprestore s:shard2 r:core_node11) 
[n:127.0.0.1:40995_solr c:backuprestore s:shard2 r:core_node11 
x:backuprestore_shard2_replica_n8] o.a.s.c.SolrCore 
[backuprestore_shard2_replica_n8] Registered new searcher 
Searcher@3f40a7b5[backuprestore_shard2_replica_n8] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 655199 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@35f43745[backuprestore_shard1_replica_n1] main]
   [junit4]   2> 655211 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 655211 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 655213 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 655213 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 655300 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.h.ReplicationHandler Poll scheduled at an interval of 3000ms
   [junit4]   2> 655301 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 655302 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 655303 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 655303 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 655304 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@788aad7d[backuprestore_shard2_replica_n6] main]
   [junit4]   2> 655317 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.h.ReplicationHandler Poll scheduled at an interval of 3000ms
   [junit4]   2> 655317 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 655317 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604114744640274432
   [junit4]   2> 655356 INFO  
(searcherExecutor-1479-thread-1-processing-n:127.0.0.1:40995_solr 
x:backuprestore_shard1_replica_n1 c:backuprestore s:shard1 r:core_node3) 
[n:127.0.0.1:40995_solr c:backuprestore s:shard1 r:core_node3 
x:backuprestore_shard1_replica_n1] o.a.s.c.SolrCore 
[backuprestore_shard1_replica_n1] Registered new searcher 
Searcher@35f43745[backuprestore_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 655357 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 655357 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 655375 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1 to Terms{values={core_node3=0, 
core_node4=0}, version=1}
   [junit4]   2> 655375 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard2 to Terms{values={core_node11=0}, 
version=0}
   [junit4]   2> 655447 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 655447 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604114744776589312
   [junit4]   2> 655450 INFO  
(searcherExecutor-1480-thread-1-processing-n:127.0.0.1:39386_solr 
x:backuprestore_shard2_replica_n6 c:backuprestore s:shard2 r:core_node9) 
[n:127.0.0.1:39386_solr c:backuprestore s:shard2 r:core_node9 
x:backuprestore_shard2_replica_n6] o.a.s.c.SolrCore 
[backuprestore_shard2_replica_n6] Registered new searcher 
Searcher@788aad7d[backuprestore_shard2_replica_n6] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 655464 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for 
shard shard2: total=2 found=1 timeoutin=9999ms
   [junit4]   2> 655719 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard2 to Terms{values={core_node11=0, 
core_node9=0}, version=1}
   [junit4]   2> 655720 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 655721 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 655721 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:39386/solr/backuprestore_shard1_replica_n2/
   [junit4]   2> 655722 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.PeerSync PeerSync: core=backuprestore_shard1_replica_n2 
url=https://127.0.0.1:39386/solr START 
replicas=[https://127.0.0.1:40995/solr/backuprestore_shard1_replica_n1/] 
nUpdates=100
   [junit4]   2> 655739 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.PeerSync PeerSync: core=backuprestore_shard1_replica_n2 
url=https://127.0.0.1:39386/solr DONE.  We have no versions.  sync failed.
   [junit4]   2> 655862 INFO  (qtp1596203610-3734) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.S.Request [backuprestore_shard1_replica_n1]  webapp=/solr path=/get 
params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2}
 status=0 QTime=0
   [junit4]   2> 655865 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.SyncStrategy Leader's attempt to sync with shard failed, moving to the 
next candidate
   [junit4]   2> 655865 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.ShardLeaderElectionContext We failed sync, but we have no versions - we 
can't sync in that case - we were active before, so become leader anyway
   [junit4]   2> 655865 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 655987 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 655987 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 655987 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:40995/solr/backuprestore_shard2_replica_n8/
   [junit4]   2> 655988 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.PeerSync PeerSync: core=backuprestore_shard2_replica_n8 
url=https://127.0.0.1:40995/solr START 
replicas=[https://127.0.0.1:39386/solr/backuprestore_shard2_replica_n6/] 
nUpdates=100
   [junit4]   2> 655988 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.PeerSync PeerSync: core=backuprestore_shard2_replica_n8 
url=https://127.0.0.1:40995/solr DONE.  We have no versions.  sync failed.
   [junit4]   2> 656014 INFO  (qtp1971084044-3741) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.c.S.Request [backuprestore_shard2_replica_n6]  webapp=/solr path=/get 
params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2}
 status=0 QTime=0
   [junit4]   2> 656014 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.SyncStrategy Leader's attempt to sync with shard failed, moving to the 
next candidate
   [junit4]   2> 656014 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.ShardLeaderElectionContext We failed sync, but we have no versions - we 
can't sync in that case - we were active before, so become leader anyway
   [junit4]   2> 656015 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 656074 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:40995/solr/backuprestore_shard2_replica_n8/ shard2
   [junit4]   2> 656210 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:39386/solr/backuprestore_shard1_replica_n2/ shard1
   [junit4]   2> 656247 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 656248 INFO  (qtp1596203610-3735) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node11&name=backuprestore_shard2_replica_n8&action=CREATE&numShards=2&shard=shard2&wt=javabin}
 status=0 QTime=4462
   [junit4]   2> 656377 INFO  (zkCallback-915-thread-3) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 656450 INFO  (indexFetcher-1499-thread-1) [    ] 
o.a.s.h.IndexFetcher Replica core_node4 is leader but it's state is down, 
skipping replication
   [junit4]   2> 656463 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node12 x:backuprestore_shard2_replica_p10] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=PULL&property.customKey=customValue&coreNodeName=core_node12&name=backuprestore_shard2_replica_p10&action=CREATE&numShards=2&shard=shard2&wt=javabin}
 status=0 QTime=5050
   [junit4]   2> 656486 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 656487 INFO  (qtp1596203610-3731) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node3&name=backuprestore_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&wt=javabin}
 status=0 QTime=4779
   [junit4]   2> 656523 INFO  (qtp1596203610-3732) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node7 x:backuprestore_shard1_replica_p5] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=PULL&property.customKey=customValue&coreNodeName=core_node7&name=backuprestore_shard1_replica_p5&action=CREATE&numShards=2&shard=shard1&wt=javabin}
 status=0 QTime=4702
   [junit4]   2> 656613 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node4&name=backuprestore_shard1_replica_n2&action=CREATE&numShards=2&shard=shard1&wt=javabin}
 status=0 QTime=5271
   [junit4]   2> 656727 INFO  (zkCallback-923-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 656738 INFO  (zkCallback-915-thread-3) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 656763 INFO  (qtp1971084044-3740) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node9&name=backuprestore_shard2_replica_n6&action=CREATE&numShards=2&shard=shard2&wt=javabin}
 status=0 QTime=5351
   [junit4]   2> 656836 INFO  (zkCallback-923-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 656848 INFO  (zkCallback-915-thread-3) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 656869 INFO  (qtp1971084044-3742) [n:127.0.0.1:39386_solr    ] 
o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 656869 INFO  (qtp1971084044-3742) [n:127.0.0.1:39386_solr    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={collection.configName=conf1&router.name=implicit&version=2&pullReplicas=1&shards=shard1,shard2&property.customKey=customValue&maxShardsPerNode=-1&router.field=shard_s&name=backuprestore&nrtReplicas=2&action=CREATE&tlogReplicas=0&wt=javabin}
 status=0 QTime=6360
   [junit4]   2> 656998 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.h.IndexFetcher Updated masterUrl to 
https://127.0.0.1:40995/solr/backuprestore_shard2_replica_n8/
   [junit4]   2> 657212 INFO  (qtp1971084044-3816) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1 to Terms{values={core_node3=1, 
core_node4=1}, version=2}
   [junit4]   2> 657304 INFO  (qtp1596203610-3728) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.S.Request [backuprestore_shard2_replica_n8]  webapp=/solr 
path=/replication 
params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 
QTime=0
   [junit4]   2> 657392 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.h.IndexFetcher Master's generation: 1
   [junit4]   2> 657392 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.h.IndexFetcher Master's version: 0
   [junit4]   2> 657392 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.h.IndexFetcher Slave's generation: 1
   [junit4]   2> 657392 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.h.IndexFetcher Slave's version: 0
   [junit4]   2> 657392 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.h.IndexFetcher New index in Master. Deleting mine...
   [junit4]   2> 657556 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.u.DirectUpdateHandler2 start 
commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 657556 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.u.SolrIndexWriter Calling setCommitData with 
IW:org.apache.solr.update.SolrIndexWriter@1c24b1e2 commitCommandVersion:0
   [junit4]   2> 657558 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@53de462d[backuprestore_shard2_replica_p10] main]
   [junit4]   2> 657570 INFO  (indexFetcher-1507-thread-1) [    ] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 657588 INFO  (searcherExecutor-1477-thread-1) [    ] 
o.a.s.c.SolrCore [backuprestore_shard2_replica_p10] Registered new searcher 
Searcher@53de462d[backuprestore_shard2_replica_p10] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 657619 INFO  (qtp1596203610-3728) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard2 to Terms{values={core_node11=1, 
core_node9=1}, version=2}
   [junit4]   2> 657730 INFO  (qtp1971084044-3736) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n6]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:40995/solr/backuprestore_shard2_replica_n8/&wt=javabin&version=2}{add=[1
 (1604114746730086400), 3 (1604114746948190208), 4 (1604114746948190209), 6 
(1604114746948190210), 9 (1604114746949238784), 11 (1604114746949238785), 13 
(1604114746949238786), 14 (1604114746949238787), 16 (1604114747046756352), 21 
(1604114747046756353), ... (25 adds)]} 0 126
   [junit4]   2> 657731 INFO  (qtp1596203610-3728) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n8]  
webapp=/solr path=/update 
params={update.distrib=TOLEADER&distrib.from=https://127.0.0.1:39386/solr/backuprestore_shard1_replica_n2/&wt=javabin&version=2}{add=[1
 (1604114746730086400), 3 (1604114746948190208), 4 (1604114746948190209), 6 
(1604114746948190210), 9 (1604114746949238784), 11 (1604114746949238785), 13 
(1604114746949238786), 14 (1604114746949238787), 16 (1604114747046756352), 21 
(1604114747046756353), ... (25 adds)]} 0 421
   [junit4]   2> 657734 INFO  (qtp1596203610-3729) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard1_replica_n1]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:39386/solr/backuprestore_shard1_replica_n2/&wt=javabin&version=2}{add=[0
 (1604114746306461696), 2 (1604114746415513600), 5 (1604114746416562176), 7 
(1604114746416562177), 8 (1604114746416562178), 10 (1604114746416562179), 12 
(1604114746416562180), 15 (1604114746480525312), 17 (1604114746480525313), 18 
(1604114746480525314), ... (29 adds)]} 0 630
   [junit4]   2> 657734 INFO  (qtp1971084044-3816) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard1_replica_n2]  
webapp=/solr path=/update params={wt=javabin&version=2}{add=[0 
(1604114746306461696), 1, 2 (1604114746415513600), 3, 4, 5 
(1604114746416562176), 6, 7 (1604114746416562177), 8 (1604114746416562178), 9, 
... (54 adds)]} 0 828
   [junit4]   2> 657806 INFO  (qtp1971084044-3816) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1604114747250180096,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 657806 INFO  (qtp1971084044-3816) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.SolrIndexWriter Calling setCommitData with 
IW:org.apache.solr.update.SolrIndexWriter@4f4c9208 
commitCommandVersion:1604114747250180096
   [junit4]   2> 657807 INFO  (qtp1971084044-3737) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1604114747251228672,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 657807 INFO  (qtp1971084044-3737) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.SolrIndexWriter Calling setCommitData with 
IW:org.apache.solr.update.SolrIndexWriter@5bce67b5 
commitCommandVersion:1604114747251228672
   [junit4]   2> 657807 INFO  (qtp1596203610-3841) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1604114747251228672,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 657807 INFO  (qtp1596203610-3841) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.SolrIndexWriter Calling setCommitData with 
IW:org.apache.solr.update.SolrIndexWriter@40c39166 
commitCommandVersion:1604114747251228672
   [junit4]   2> 657970 INFO  (qtp1596203610-3840) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1604114747422146560,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 657970 INFO  (qtp1596203610-3840) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.SolrIndexWriter Calling setCommitData with 
IW:org.apache.solr.update.SolrIndexWriter@1e939d7f 
commitCommandVersion:1604114747422146560
   [junit4]   2> 658188 INFO  (qtp1971084044-3737) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@4129cde7[backuprestore_shard1_replica_n2] main]
   [junit4]   2> 658188 INFO  (qtp1971084044-3737) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 658213 INFO  
(searcherExecutor-1476-thread-1-processing-n:127.0.0.1:39386_solr 
x:backuprestore_shard1_replica_n2 c:backuprestore s:shard1 r:core_node4) 
[n:127.0.0.1:39386_solr c:backuprestore s:shard1 r:core_node4 
x:backuprestore_shard1_replica_n2] o.a.s.c.SolrCore 
[backuprestore_shard1_replica_n2] Registered new searcher 
Searcher@4129cde7[backuprestore_shard1_replica_n2] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(8.0.0):C29)))}
   [junit4]   2> 658214 INFO  (qtp1971084044-3737) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard1_replica_n2]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:39386/solr/backuprestore_shard1_replica_n2/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 427
   [junit4]   2> 658229 INFO  (qtp1971084044-3816) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@699660f2[backuprestore_shard2_replica_n6] main]
   [junit4]   2> 658230 INFO  (qtp1971084044-3816) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 658230 INFO  (qtp1596203610-3840) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@6870dd1e[backuprestore_shard2_replica_n8] main]
   [junit4]   2> 658231 INFO  (qtp1596203610-3840) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 658237 INFO  
(searcherExecutor-1480-thread-1-processing-n:127.0.0.1:39386_solr 
x:backuprestore_shard2_replica_n6 c:backuprestore s:shard2 r:core_node9) 
[n:127.0.0.1:39386_solr c:backuprestore s:shard2 r:core_node9 
x:backuprestore_shard2_replica_n6] o.a.s.c.SolrCore 
[backuprestore_shard2_replica_n6] Registered new searcher 
Searcher@699660f2[backuprestore_shard2_replica_n6] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(8.0.0):C25)))}
   [junit4]   2> 658238 INFO  
(searcherExecutor-1482-thread-1-processing-n:127.0.0.1:40995_solr 
x:backuprestore_shard2_replica_n8 c:backuprestore s:shard2 r:core_node11) 
[n:127.0.0.1:40995_solr c:backuprestore s:shard2 r:core_node11 
x:backuprestore_shard2_replica_n8] o.a.s.c.SolrCore 
[backuprestore_shard2_replica_n8] Registered new searcher 
Searcher@6870dd1e[backuprestore_shard2_replica_n8] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(8.0.0):C25)))}
   [junit4]   2> 658238 INFO  (qtp1971084044-3816) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard2 r:core_node9 x:backuprestore_shard2_replica_n6] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n6]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:39386/solr/backuprestore_shard1_replica_n2/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 451
   [junit4]   2> 658239 INFO  (qtp1596203610-3840) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n8]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:39386/solr/backuprestore_shard1_replica_n2/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 268
   [junit4]   2> 658244 INFO  (qtp1596203610-3841) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@36de1f79[backuprestore_shard1_replica_n1] main]
   [junit4]   2> 658245 INFO  (qtp1596203610-3841) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 658249 INFO  
(searcherExecutor-1479-thread-1-processing-n:127.0.0.1:40995_solr 
x:backuprestore_shard1_replica_n1 c:backuprestore s:shard1 r:core_node3) 
[n:127.0.0.1:40995_solr c:backuprestore s:shard1 r:core_node3 
x:backuprestore_shard1_replica_n1] o.a.s.c.SolrCore 
[backuprestore_shard1_replica_n1] Registered new searcher 
Searcher@36de1f79[backuprestore_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(8.0.0):C29)))}
   [junit4]   2> 658265 INFO  (qtp1596203610-3841) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard1_replica_n1]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:39386/solr/backuprestore_shard1_replica_n2/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 478
   [junit4]   2> 658297 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard1_replica_n2]  
webapp=/solr path=/update 
params={_stateVer_=backuprestore:9&waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=}
 0 561
   [junit4]   2> 658297 INFO  
(TEST-TestLocalFSCloudBackupRestore.test-seed#[622F3E9135CCDCD5]) [    ] 
o.a.s.c.a.c.AbstractCloudBackupRestoreTestCase Indexed 54 docs to collection: 
backuprestore
   [junit4]   2> 658340 INFO  (qtp1596203610-3729) [n:127.0.0.1:40995_solr 
c:backuprestore s:shard2 r:core_node11 x:backuprestore_shard2_replica_n8] 
o.a.s.c.S.Request [backuprestore_shard2_replica_n8]  webapp=/solr path=/select 
params={q=*:*&distrib=false&wt=javabin&version=2} hits=25 status=0 QTime=0
   [junit4]   2> 658363 INFO  (qtp1971084044-3741) [n:127.0.0.1:39386_solr 
c:backuprestore s:shard1 r:core_node4 x:backuprestore_shard1_replica_n2] 
o.a.s.c.S.Request [backuprestore_shard1_replica_n2]  webapp=/solr path=/select 
params={q=*:*&distrib=false&wt=javabin&version=2} hits=29 status=0 QTime=0
   [junit4]   2> 658364 INFO  
(TEST-TestLocalFSCloudBackupRestore.test-seed#[622F3E9135CCDCD5]) [    ] 
o.a.s.c.a.c.AbstractCloudBackupRestoreTestCase Triggering Backup command
   [junit4]   2> 658366 INFO  (qtp1971084044-3741) [n:127.0.0.1:39386_solr    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :backup with params 
async=507ef60f-b905-4dad-a359-ec1372cf242c&name=mytestbackup&action=BACKUP&location=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/my+backup-001&collection=backuprestore&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 658450 INFO  (qtp1971084044-3741) [n:127.0.0.1:39386_solr 
c:backuprestore   ] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/collections 
params={async=507ef60f-b905-4dad-a359-ec1372cf242c&name=mytestbackup&action=BACKUP&location=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_622F3E9135CCDCD5-001/my+backup-001&collection=backuprestore&wt=javabin&version=2}
 status=0 QTime=84
   [junit4]   2> 658485 INFO  (qtp1971084044-3743) [n:127.0.0.1:39386_solr    ] 
o.a.s.h.a.Collection

[...truncated too long message...]

[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

jar-checksums:
    [mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/null1883548495
     [copy] Copying 39 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/null1883548495
   [delete] Deleting directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/null1883548495

resolve-example:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

resolve-server:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

jar-checksums:
    [mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/null919524467
     [copy] Copying 247 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/null919524467
   [delete] Deleting directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/null919524467

check-working-copy:
[ivy:cachepath] :: resolving dependencies :: 
org.eclipse.jgit#org.eclipse.jgit-caller;working
[ivy:cachepath]         confs: [default]
[ivy:cachepath]         found 
org.eclipse.jgit#org.eclipse.jgit;4.6.0.201612231935-r in public
[ivy:cachepath]         found com.jcraft#jsch;0.1.53 in public
[ivy:cachepath]         found com.googlecode.javaewah#JavaEWAH;1.1.6 in public
[ivy:cachepath]         found org.apache.httpcomponents#httpclient;4.3.6 in 
public
[ivy:cachepath]         found org.apache.httpcomponents#httpcore;4.3.3 in public
[ivy:cachepath]         found commons-logging#commons-logging;1.1.3 in public
[ivy:cachepath]         found commons-codec#commons-codec;1.6 in public
[ivy:cachepath]         found org.slf4j#slf4j-api;1.7.2 in public
[ivy:cachepath] :: resolution report :: resolve 31ms :: artifacts dl 5ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   8   |   0   |   0   |   0   ||   8   |   0   |
        ---------------------------------------------------------------------
[wc-checker] Initializing working copy...
[wc-checker] SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[wc-checker] SLF4J: Defaulting to no-operation (NOP) logger implementation
[wc-checker] SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for 
further details.
[wc-checker] Checking working copy status...

-jenkins-base:

BUILD SUCCESSFUL
Total time: 595 minutes 13 seconds
Archiving artifacts
WARN: No artifacts found that match the file pattern 
"**/*.events,heapdumps/**,**/hs_err_pid*". Configuration error?
WARN: java.lang.InterruptedException: no matches found within 10000
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
