Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/89/

3 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
We think that split was successful but sub-shard states were not updated even 
after 2 minutes.

Stack Trace:
java.lang.AssertionError: We think that split was successful but sub-shard 
states were not updated even after 2 minutes.
        at 
__randomizedtesting.SeedInfo.seed([45EB8D19C41F0CEA:CECC5EC88519A76E]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:555)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)
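
For context on the assertion at ShardSplitTest.java:555: after a SPLITSHARD is reported successful, the parent shard is expected to be published as "inactive" and both sub-shards as "active" in cluster state, and the test times out after 2 minutes waiting for that. A minimal sketch of that condition is below; the Map-based state and the splitStatePublished helper are illustrative, not Solr's ClusterState API (sub-shards of shard1 are conventionally named shard1_0 and shard1_1).

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the sub-shard state check the failed assertion waits on.
public class SubShardStateCheck {
    /** True once the parent shard is inactive and both sub-shards are active. */
    static boolean splitStatePublished(Map<String, String> shardStates, String parent) {
        return "inactive".equals(shardStates.get(parent))
                && "active".equals(shardStates.get(parent + "_0"))
                && "active".equals(shardStates.get(parent + "_1"));
    }

    public static void main(String[] args) {
        Map<String, String> states = new HashMap<>();
        states.put("shard1", "active");          // parent not yet switched off
        states.put("shard1_0", "construction");  // sub-shards still being built
        states.put("shard1_1", "construction");
        System.out.println(splitStatePublished(states, "shard1")); // false

        states.put("shard1", "inactive");
        states.put("shard1_0", "active");
        states.put("shard1_1", "active");
        System.out.println(splitStatePublished(states, "shard1")); // true
    }
}
```

The test failing here means this condition never became true within the 2-minute window after the ChaosMonkey disruption.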


FAILED:  org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:57751/solr: Solr cloud with available 
number of nodes:2 is insufficient for restoring a collection with 3 shards, 
total replicas per shard 1 and maxShardsPerNode 1. Consider increasing 
maxShardsPerNode value OR number of available nodes.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:57751/solr: Solr cloud with available number of 
nodes:2 is insufficient for restoring a collection with 3 shards, total 
replicas per shard 1 and maxShardsPerNode 1. Consider increasing 
maxShardsPerNode value OR number of available nodes.
        at 
__randomizedtesting.SeedInfo.seed([45EB8D19C41F0CEA:CDBFB2C36AE36112]:0)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
        at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
        at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
        at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
        at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
        at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
        at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
        at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:320)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:145)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)
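
Both backup/restore failures report the same capacity check: 3 shards with 1 replica each need 3 placement slots, but 2 nodes at maxShardsPerNode=1 provide only 2. A simplified sketch of that arithmetic is below; it models the constraint stated in the error message, not Solr's actual replica-placement code.

```java
// Simplified model of the capacity constraint behind the restore error:
// a restore needs at least ceil(shards * replicasPerShard / maxShardsPerNode) nodes.
public class RestoreCapacityCheck {
    /** Minimum live nodes needed to place all cores at maxShardsPerNode each. */
    static int requiredNodes(int shards, int replicasPerShard, int maxShardsPerNode) {
        int totalCores = shards * replicasPerShard;
        return (totalCores + maxShardsPerNode - 1) / maxShardsPerNode; // ceiling division
    }

    static boolean sufficient(int liveNodes, int shards, int replicasPerShard, int maxShardsPerNode) {
        return liveNodes >= requiredNodes(shards, replicasPerShard, maxShardsPerNode);
    }

    public static void main(String[] args) {
        // The failing restore: 3 shards, 1 replica per shard, maxShardsPerNode=1, 2 live nodes.
        System.out.println(sufficient(2, 3, 1, 1)); // false: 3 nodes required
        // Raising maxShardsPerNode to 2 lets 2 nodes hold the 3 cores.
        System.out.println(sufficient(2, 3, 1, 2)); // true
    }
}
```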


FAILED:  
org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at https://127.0.0.1:57700/solr: Solr cloud with available 
number of nodes:2 is insufficient for restoring a collection with 3 shards, 
total replicas per shard 1 and maxShardsPerNode 1. Consider increasing 
maxShardsPerNode value OR number of available nodes.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:57700/solr: Solr cloud with available number 
of nodes:2 is insufficient for restoring a collection with 3 shards, total 
replicas per shard 1 and maxShardsPerNode 1. Consider increasing 
maxShardsPerNode value OR number of available nodes.
        at 
__randomizedtesting.SeedInfo.seed([45EB8D19C41F0CEA:CDBFB2C36AE36112]:0)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
        at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
        at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
        at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
        at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
        at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
        at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
        at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:320)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:145)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)
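
The remedy the message suggests (a larger maxShardsPerNode on the RESTORE call) might look like the request sketched below. The host, backup, collection, and location values are hypothetical; only the parameter names follow the Collections API.

```java
// Hedged sketch of building a Collections API RESTORE request that passes
// maxShardsPerNode explicitly, as the error message recommends.
public class RestoreRequestSketch {
    static String restoreUrl(String baseUrl, String backupName, String collection,
                             String location, int maxShardsPerNode) {
        return baseUrl + "/admin/collections?action=RESTORE"
                + "&name=" + backupName
                + "&collection=" + collection
                + "&location=" + location
                + "&maxShardsPerNode=" + maxShardsPerNode;
    }

    public static void main(String[] args) {
        // All concrete values here are placeholders for illustration.
        System.out.println(restoreUrl("https://127.0.0.1:57700/solr",
                "mybackup", "backuprestore_restored", "/tmp/backups", 2));
    }
}
```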




Build Log:
[...truncated 14064 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore
   [junit4]   2> 3362478 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[45EB8D19C41F0CEA]-worker) [    ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/init-core-data-001
   [junit4]   2> 3362478 WARN  
(SUITE-TestLocalFSCloudBackupRestore-seed#[45EB8D19C41F0CEA]-worker) [    ] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=24 numCloses=24
   [junit4]   2> 3362478 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[45EB8D19C41F0CEA]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 3362480 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[45EB8D19C41F0CEA]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 3362481 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[45EB8D19C41F0CEA]-worker) [    ] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001
   [junit4]   2> 3362481 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[45EB8D19C41F0CEA]-worker) [    ] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 3362481 INFO  (Thread-6409) [    ] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 3362481 INFO  (Thread-6409) [    ] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 3362483 ERROR (Thread-6409) [    ] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 3362581 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[45EB8D19C41F0CEA]-worker) [    ] 
o.a.s.c.ZkTestServer start zk server on port:37431
   [junit4]   2> 3362612 INFO  (zkConnectionManagerCallback-7650-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3362624 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 3362626 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 3362627 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 3362627 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 3362627 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 3362628 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@75e3b52f{/solr,null,AVAILABLE}
   [junit4]   2> 3362628 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@7b4a9aea{SSL,[ssl, 
http/1.1]}{127.0.0.1:57700}
   [junit4]   2> 3362628 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.e.j.s.Server Started @3362800ms
   [junit4]   2> 3362628 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=57700}
   [junit4]   2> 3362629 ERROR (jetty-launcher-7647-thread-1) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 3362629 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 3362629 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.5.0
   [junit4]   2> 3362629 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 3362629 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 3362629 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-06-27T04:07:08.432Z
   [junit4]   2> 3362651 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 3362651 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 3362651 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 3362651 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@30e5154e{/solr,null,AVAILABLE}
   [junit4]   2> 3362652 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@2ef95e37{SSL,[ssl, 
http/1.1]}{127.0.0.1:35513}
   [junit4]   2> 3362652 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.e.j.s.Server Started @3362823ms
   [junit4]   2> 3362652 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=35513}
   [junit4]   2> 3362652 ERROR (jetty-launcher-7647-thread-2) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 3362652 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 3362652 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.5.0
   [junit4]   2> 3362652 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 3362652 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 3362652 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-06-27T04:07:08.455Z
   [junit4]   2> 3362654 INFO  (zkConnectionManagerCallback-7652-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3362655 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 3362668 INFO  (zkConnectionManagerCallback-7654-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3362668 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 3362816 INFO  (jetty-launcher-7647-thread-1) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:37431/solr
   [junit4]   2> 3362819 INFO  (zkConnectionManagerCallback-7658-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3362821 INFO  (zkConnectionManagerCallback-7660-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3362841 INFO  (jetty-launcher-7647-thread-2) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:37431/solr
   [junit4]   2> 3362842 INFO  (zkConnectionManagerCallback-7666-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3362845 INFO  (zkConnectionManagerCallback-7668-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3362858 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 3362858 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.c.OverseerElectionContext I am going to be 
the leader 127.0.0.1:35513_solr
   [junit4]   2> 3362860 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.c.Overseer Overseer 
(id=74044929022492678-127.0.0.1:35513_solr-n_0000000000) starting
   [junit4]   2> 3362869 INFO  (zkConnectionManagerCallback-7675-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3362877 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:37431/solr ready
   [junit4]   2> 3362879 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:35513_solr
   [junit4]   2> 3362900 INFO  (zkCallback-7667-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 3362901 INFO  (zkCallback-7674-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 3362922 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 3362950 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 3362953 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 3362956 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating 
transient cache for 2147483647 transient cores
   [junit4]   2> 3362956 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:57700_solr
   [junit4]   2> 3362957 INFO  (zkCallback-7667-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 3362957 INFO  (zkCallback-7674-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 3362958 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 3362972 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_35513.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3362980 INFO  (zkConnectionManagerCallback-7681-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3362981 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (2)
   [junit4]   2> 3362982 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:37431/solr ready
   [junit4]   2> 3362982 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 3362986 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_35513.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3362986 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_35513.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3362988 INFO  (jetty-launcher-7647-thread-2) 
[n:127.0.0.1:35513_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node2/.
   [junit4]   2> 3363004 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_57700.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3363017 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_57700.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3363017 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_57700.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3363019 INFO  (jetty-launcher-7647-thread-1) 
[n:127.0.0.1:57700_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node1/.
   [junit4]   2> 3363050 INFO  (zkConnectionManagerCallback-7684-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3363053 INFO  (zkConnectionManagerCallback-7689-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3363054 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[45EB8D19C41F0CEA]-worker) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 3363055 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[45EB8D19C41F0CEA]-worker) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:37431/solr ready
   [junit4]   2> 3363190 INFO  
(TEST-TestLocalFSCloudBackupRestore.test-seed#[45EB8D19C41F0CEA]) [    ] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 3363210 INFO  (qtp1673140579-28122) [n:127.0.0.1:57700_solr    
] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
pullReplicas=0&property.customKey=customValue&collection.configName=conf1&name=backuprestore&nrtReplicas=1&action=CREATE&numShards=2&tlogReplicas=0&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 3363225 INFO  (OverseerThreadFactory-9636-thread-1) [    ] 
o.a.s.c.a.c.CreateCollectionCmd Create collection backuprestore
   [junit4]   2> 3363351 INFO  
(OverseerStateUpdate-74044929022492678-127.0.0.1:35513_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"backuprestore_shard1_replica_n1",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:57700/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 3363354 INFO  
(OverseerStateUpdate-74044929022492678-127.0.0.1:35513_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"backuprestore_shard2_replica_n3",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:35513/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 3363573 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node2&name=backuprestore_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&wt=javabin
   [junit4]   2> 3363587 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr    
x:backuprestore_shard2_replica_n3] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node4&name=backuprestore_shard2_replica_n3&action=CREATE&numShards=2&shard=shard2&wt=javabin
   [junit4]   2> 3363588 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr    
x:backuprestore_shard2_replica_n3] o.a.s.c.TransientSolrCoreCacheDefault 
Allocating transient cache for 2147483647 transient cores
   [junit4]   2> 3364588 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.5.0
   [junit4]   2> 3364596 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.s.IndexSchema [backuprestore_shard1_replica_n1] Schema name=minimal
   [junit4]   2> 3364599 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 3364600 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard1_replica_n1' using 
configuration from collection backuprestore, trusted=true
   [junit4]   2> 3364600 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.5.0
   [junit4]   2> 3364600 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_57700.solr.core.backuprestore.shard1.replica_n1' (registry 
'solr.core.backuprestore.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3364600 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 3364601 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SolrCore [[backuprestore_shard1_replica_n1] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node1/backuprestore_shard1_replica_n1],
 
dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node1/./backuprestore_shard1_replica_n1/data/]
   [junit4]   2> 3364631 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.s.IndexSchema [backuprestore_shard2_replica_n3] Schema name=minimal
   [junit4]   2> 3364634 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 3364634 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard2_replica_n3' using 
configuration from collection backuprestore, trusted=true
   [junit4]   2> 3364635 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_35513.solr.core.backuprestore.shard2.replica_n3' (registry 
'solr.core.backuprestore.shard2.replica_n3') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3364635 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 3364635 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.SolrCore [[backuprestore_shard2_replica_n3] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node2/backuprestore_shard2_replica_n3],
 
dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node2/./backuprestore_shard2_replica_n3/data/]
   [junit4]   2> 3364724 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 3364724 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 3364725 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 3364726 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 3364728 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@515ba4c9[backuprestore_shard2_replica_n3] main]
   [junit4]   2> 3364729 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 3364729 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 3364729 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 3364730 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 3364730 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 3364730 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 3364731 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 3364731 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604397228919619584
   [junit4]   2> 3364732 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@1722fb4a[backuprestore_shard1_replica_n1] main]
   [junit4]   2> 3364734 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 3364735 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 3364736 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 3364737 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604397228925911040
   [junit4]   2> 3364739 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard2 to Terms{values={core_node4=0}, 
version=0}
   [junit4]   2> 3364743 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 3364743 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 3364743 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:35513/solr/backuprestore_shard2_replica_n3/
   [junit4]   2> 3364743 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1 to Terms{values={core_node2=0}, 
version=0}
   [junit4]   2> 3364743 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 3364744 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.SyncStrategy 
https://127.0.0.1:35513/solr/backuprestore_shard2_replica_n3/ has no replicas
   [junit4]   2> 3364744 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 3364745 INFO  
(searcherExecutor-9646-thread-1-processing-n:127.0.0.1:35513_solr 
x:backuprestore_shard2_replica_n3 c:backuprestore s:shard2 r:core_node4) 
[n:127.0.0.1:35513_solr c:backuprestore s:shard2 r:core_node4 
x:backuprestore_shard2_replica_n3] o.a.s.c.SolrCore 
[backuprestore_shard2_replica_n3] Registered new searcher 
Searcher@515ba4c9[backuprestore_shard2_replica_n3] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 3364746 INFO  
(searcherExecutor-9645-thread-1-processing-n:127.0.0.1:57700_solr 
x:backuprestore_shard1_replica_n1 c:backuprestore s:shard1 r:core_node2) 
[n:127.0.0.1:57700_solr c:backuprestore s:shard1 r:core_node2 
x:backuprestore_shard1_replica_n1] o.a.s.c.SolrCore 
[backuprestore_shard1_replica_n1] Registered new searcher 
Searcher@1722fb4a[backuprestore_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 3364748 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 3364748 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 3364748 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:57700/solr/backuprestore_shard1_replica_n1/
   [junit4]   2> 3364749 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 3364750 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SyncStrategy 
https://127.0.0.1:57700/solr/backuprestore_shard1_replica_n1/ has no replicas
   [junit4]   2> 3364750 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 3364752 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:35513/solr/backuprestore_shard2_replica_n3/ shard2
   [junit4]   2> 3364755 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:57700/solr/backuprestore_shard1_replica_n1/ shard1
   [junit4]   2> 3364858 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 3364860 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node2&name=backuprestore_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&wt=javabin}
 status=0 QTime=1286
   [junit4]   2> 3364911 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 3364913 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node4&name=backuprestore_shard2_replica_n3&action=CREATE&numShards=2&shard=shard2&wt=javabin}
 status=0 QTime=1326
   [junit4]   2> 3364916 INFO  (qtp1673140579-28122) [n:127.0.0.1:57700_solr    
] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 3365017 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3365017 INFO  (zkCallback-7667-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3365230 INFO  
(OverseerCollectionConfigSetProcessor-74044929022492678-127.0.0.1:35513_solr-n_0000000000)
 [    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 3365916 INFO  (qtp1673140579-28122) [n:127.0.0.1:57700_solr    
] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={pullReplicas=0&property.customKey=customValue&collection.configName=conf1&name=backuprestore&nrtReplicas=1&action=CREATE&numShards=2&tlogReplicas=0&wt=javabin&version=2}
 status=0 QTime=2705
   [junit4]   2> 3365937 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard2 to Terms{values={core_node4=1}, 
version=1}
   [junit4]   2> 3365937 INFO  (qtp771317769-28134) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n3]  
webapp=/solr path=/update params={wt=javabin&version=2}{add=[2 
(1604397230174765056), 3 (1604397230175813632), 5 (1604397230175813633), 6 
(1604397230175813634), 7 (1604397230175813635), 9 (1604397230175813636), 17 
(1604397230175813637), 18 (1604397230175813638), 19 (1604397230175813639), 21 
(1604397230175813640), ... (29 adds)]} 0 8
   [junit4]   2> 3365938 INFO  (qtp1673140579-28118) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1 to Terms{values={core_node2=1}, 
version=1}
   [junit4]   2> 3365938 INFO  (qtp1673140579-28118) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard1_replica_n1]  
webapp=/solr path=/update params={wt=javabin&version=2}{add=[0 
(1604397230172667904), 1 (1604397230174765056), 4 (1604397230174765057), 8 
(1604397230174765058), 10 (1604397230174765059), 11 (1604397230174765060), 12 
(1604397230174765061), 13 (1604397230174765062), 14 (1604397230174765063), 15 
(1604397230174765064), ... (42 adds)]} 0 12
   [junit4]   2> 3365947 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1604397230194688000,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 3365948 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.SolrIndexWriter Calling setCommitData with 
IW:org.apache.solr.update.SolrIndexWriter@401d8602 
commitCommandVersion:1604397230194688000
   [junit4]   2> 3365956 INFO  (qtp771317769-28133) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1604397230204125184,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 3365956 INFO  (qtp771317769-28133) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.SolrIndexWriter Calling setCommitData with 
IW:org.apache.solr.update.SolrIndexWriter@52fcb49e 
commitCommandVersion:1604397230204125184
   [junit4]   2> 3365964 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@402a86e6[backuprestore_shard1_replica_n1] main]
   [junit4]   2> 3365965 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 3365966 INFO  
(searcherExecutor-9645-thread-1-processing-n:127.0.0.1:57700_solr 
x:backuprestore_shard1_replica_n1 c:backuprestore s:shard1 r:core_node2) 
[n:127.0.0.1:57700_solr c:backuprestore s:shard1 r:core_node2 
x:backuprestore_shard1_replica_n1] o.a.s.c.SolrCore 
[backuprestore_shard1_replica_n1] Registered new searcher 
Searcher@402a86e6[backuprestore_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.5.0):C42)))}
   [junit4]   2> 3365966 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1 r:core_node2 x:backuprestore_shard1_replica_n1] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard1_replica_n1]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:35513/solr/backuprestore_shard2_replica_n3/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 19
   [junit4]   2> 3365979 INFO  (qtp771317769-28133) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@3dccc513[backuprestore_shard2_replica_n3] main]
   [junit4]   2> 3365981 INFO  
(searcherExecutor-9646-thread-1-processing-n:127.0.0.1:35513_solr 
x:backuprestore_shard2_replica_n3 c:backuprestore s:shard2 r:core_node4) 
[n:127.0.0.1:35513_solr c:backuprestore s:shard2 r:core_node4 
x:backuprestore_shard2_replica_n3] o.a.s.c.SolrCore 
[backuprestore_shard2_replica_n3] Registered new searcher 
Searcher@3dccc513[backuprestore_shard2_replica_n3] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.5.0):C29)))}
   [junit4]   2> 3365981 INFO  (qtp771317769-28133) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 3365981 INFO  (qtp771317769-28133) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n3]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:35513/solr/backuprestore_shard2_replica_n3/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 25
   [junit4]   2> 3365982 INFO  (qtp771317769-28135) [n:127.0.0.1:35513_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n3] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n3]  
webapp=/solr path=/update 
params={_stateVer_=backuprestore:4&waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=}
 0 43
   [junit4]   2> 3365982 INFO  
(TEST-TestLocalFSCloudBackupRestore.test-seed#[45EB8D19C41F0CEA]) [    ] 
o.a.s.c.a.c.AbstractCloudBackupRestoreTestCase Indexed 71 docs to collection: 
backuprestore
   [junit4]   2> 3365983 INFO  (qtp1673140579-28121) [n:127.0.0.1:57700_solr    
] o.a.s.h.a.CollectionsHandler Invoked Collection Action :splitshard with 
params 
action=SPLITSHARD&collection=backuprestore&shard=shard1&wt=javabin&version=2 
and sendToOCPQueue=true
   [junit4]   2> 3365986 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Split shard invoked
   [junit4]   2> 3365991 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics 
params={prefix=CONTAINER.fs.usableSpace&wt=javabin&version=2&group=solr.node} 
status=0 QTime=0
   [junit4]   2> 3365992 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr    
] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics 
params={wt=javabin&version=2&key=solr.core.backuprestore.shard1.replica_n1:INDEX.sizeInBytes}
 status=0 QTime=0
   [junit4]   2> 3365995 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Creating slice shard1_0 
of collection backuprestore on 127.0.0.1:57700_solr
   [junit4]   2> 3366097 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3366098 INFO  (zkCallback-7667-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3366996 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Adding replica 
backuprestore_shard1_0_replica_n5 as part of slice shard1_0 of collection 
backuprestore on 127.0.0.1:57700_solr
   [junit4]   2> 3366997 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Node Identified 
127.0.0.1:57700_solr for creating new replica
   [junit4]   2> 3366999 INFO  
(OverseerStateUpdate-74044929022492678-127.0.0.1:35513_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"addreplica",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard1_0",
   [junit4]   2>   "core":"backuprestore_shard1_0_replica_n5",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:57700/solr",
   [junit4]   2>   "node_name":"127.0.0.1:57700_solr",
   [junit4]   2>   "type":"NRT"} 
   [junit4]   2> 3367101 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3367102 INFO  (zkCallback-7667-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3367201 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&coreNodeName=core_node7&collection.configName=conf1&name=backuprestore_shard1_0_replica_n5&action=CREATE&collection=backuprestore&shard=shard1_0&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 3367212 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.5.0
   [junit4]   2> 3367220 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.s.IndexSchema [backuprestore_shard1_0_replica_n5] Schema name=minimal
   [junit4]   2> 3367223 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 3367223 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard1_0_replica_n5' 
using configuration from collection backuprestore, trusted=true
   [junit4]   2> 3367223 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_57700.solr.core.backuprestore.shard1_0.replica_n5' (registry 
'solr.core.backuprestore.shard1_0.replica_n5') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3367224 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 3367224 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SolrCore [[backuprestore_shard1_0_replica_n5] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node1/backuprestore_shard1_0_replica_n5],
 
dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node1/./backuprestore_shard1_0_replica_n5/data/]
   [junit4]   2> 3367291 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 3367291 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 3367293 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 3367293 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 3367295 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@6d9fe940[backuprestore_shard1_0_replica_n5] main]
   [junit4]   2> 3367299 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 3367299 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 3367300 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 3367301 INFO  
(searcherExecutor-9655-thread-1-processing-n:127.0.0.1:57700_solr 
x:backuprestore_shard1_0_replica_n5 c:backuprestore s:shard1_0 r:core_node7) 
[n:127.0.0.1:57700_solr c:backuprestore s:shard1_0 r:core_node7 
x:backuprestore_shard1_0_replica_n5] o.a.s.c.SolrCore 
[backuprestore_shard1_0_replica_n5] Registered new searcher 
Searcher@6d9fe940[backuprestore_shard1_0_replica_n5] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 3367301 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604397231614459904
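An aside on the "new clock" value logged above: Solr's update versions are, by convention in `VersionInfo`, derived from the wall clock, with `System.currentTimeMillis()` in the high bits (shifted left by 20). A minimal sketch decoding the logged value under that assumption (illustrative only; check `VersionInfo` in your Solr version, as the method name `toEpochMillis` here is not a Solr API):

```java
import java.time.Instant;

// Sketch: recover a wall-clock time from a Solr version "clock" value,
// assuming Solr's version = currentTimeMillis << 20 convention (VersionInfo).
public class VersionClock {
    static long toEpochMillis(long versionClock) {
        return versionClock >>> 20; // drop the 20 low sub-millisecond bits
    }

    public static void main(String[] args) {
        long clock = 1604397231614459904L; // value from the log entry above
        System.out.println(Instant.ofEpochMilli(toEpochMillis(clock)));
    }
}
```

For this log's value the decoded instant falls in mid-2018, consistent with the 7.x test run.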
   [junit4]   2> 3367303 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, 
tlog=null}
   [junit4]   2> 3367303 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3367304 INFO  (zkCallback-7667-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3367306 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1_0 to Terms{values={core_node7=0}, 
version=0}
   [junit4]   2> 3367309 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 3367309 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 3367309 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:57700/solr/backuprestore_shard1_0_replica_n5/
   [junit4]   2> 3367309 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 3367309 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SyncStrategy 
https://127.0.0.1:57700/solr/backuprestore_shard1_0_replica_n5/ has no replicas
   [junit4]   2> 3367310 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 3367313 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:57700/solr/backuprestore_shard1_0_replica_n5/ shard1_0
   [junit4]   2> 3367414 INFO  (zkCallback-7667-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3367415 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3367464 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 3367467 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&coreNodeName=core_node7&collection.configName=conf1&name=backuprestore_shard1_0_replica_n5&action=CREATE&collection=backuprestore&shard=shard1_0&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=266
   [junit4]   2> 3367467 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Creating slice shard1_1 
of collection backuprestore on 127.0.0.1:57700_solr
   [junit4]   2> 3367570 INFO  (zkCallback-7667-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3367570 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3368468 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Adding replica 
backuprestore_shard1_1_replica_n6 as part of slice shard1_1 of collection 
backuprestore on 127.0.0.1:57700_solr
   [junit4]   2> 3368469 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Node Identified 
127.0.0.1:57700_solr for creating new replica
   [junit4]   2> 3368471 INFO  
(OverseerStateUpdate-74044929022492678-127.0.0.1:35513_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"addreplica",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard1_1",
   [junit4]   2>   "core":"backuprestore_shard1_1_replica_n6",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:57700/solr",
   [junit4]   2>   "node_name":"127.0.0.1:57700_solr",
   [junit4]   2>   "type":"NRT"} 
   [junit4]   2> 3368573 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3368574 INFO  (zkCallback-7667-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3368673 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&coreNodeName=core_node8&collection.configName=conf1&name=backuprestore_shard1_1_replica_n6&action=CREATE&collection=backuprestore&shard=shard1_1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 3368682 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.5.0
   [junit4]   2> 3368690 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.s.IndexSchema [backuprestore_shard1_1_replica_n6] Schema name=minimal
   [junit4]   2> 3368693 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 3368693 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard1_1_replica_n6' 
using configuration from collection backuprestore, trusted=true
   [junit4]   2> 3368694 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_57700.solr.core.backuprestore.shard1_1.replica_n6' (registry 
'solr.core.backuprestore.shard1_1.replica_n6') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@31e30f9d
   [junit4]   2> 3368694 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 3368694 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SolrCore [[backuprestore_shard1_1_replica_n6] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node1/backuprestore_shard1_1_replica_n6],
 
dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_45EB8D19C41F0CEA-001/tempDir-001/node1/./backuprestore_shard1_1_replica_n6/data/]
   [junit4]   2> 3368776 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3368776 INFO  (zkCallback-7667-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3368782 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 3368782 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 3368783 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 3368783 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 3368785 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@14e40c15[backuprestore_shard1_1_replica_n6] main]
   [junit4]   2> 3368787 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 3368787 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 3368788 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 3368789 INFO  
(searcherExecutor-9660-thread-1-processing-n:127.0.0.1:57700_solr 
x:backuprestore_shard1_1_replica_n6 c:backuprestore s:shard1_1 r:core_node8) 
[n:127.0.0.1:57700_solr c:backuprestore s:shard1_1 r:core_node8 
x:backuprestore_shard1_1_replica_n6] o.a.s.c.SolrCore 
[backuprestore_shard1_1_replica_n6] Registered new searcher 
Searcher@14e40c15[backuprestore_shard1_1_replica_n6] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 3368789 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604397233174740992
   [junit4]   2> 3368791 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, 
tlog=null}
   [junit4]   2> 3368794 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1_1 to Terms{values={core_node8=0}, 
version=0}
   [junit4]   2> 3368796 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 3368796 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 3368797 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:57700/solr/backuprestore_shard1_1_replica_n6/
   [junit4]   2> 3368797 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 3368797 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SyncStrategy 
https://127.0.0.1:57700/solr/backuprestore_shard1_1_replica_n6/ has no replicas
   [junit4]   2> 3368797 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 3368806 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:57700/solr/backuprestore_shard1_1_replica_n6/ shard1_1
   [junit4]   2> 3368907 INFO  (zkCallback-7667-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3368907 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3368957 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 3368959 INFO  (qtp1673140579-28120) [n:127.0.0.1:57700_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&coreNodeName=core_node8&collection.configName=conf1&name=backuprestore_shard1_1_replica_n6&action=CREATE&collection=backuprestore&shard=shard1_1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=287
   [junit4]   2> 3368960 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Asking parent leader to 
wait for: backuprestore_shard1_0_replica_n5 to be alive on: 127.0.0.1:57700_solr
   [junit4]   2> 3368960 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Asking parent leader to 
wait for: backuprestore_shard1_1_replica_n6 to be alive on: 127.0.0.1:57700_solr
   [junit4]   2> 3368961 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.PrepRecoveryOp Going to wait for 
coreNodeName: core_node7, state: active, checkLive: true, onlyIfLeader: true, 
onlyIfLeaderActive: null, maxTime: 183 s
   [junit4]   2> 3368963 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.PrepRecoveryOp In 
WaitForState(active): collection=backuprestore, shard=shard1_0, 
thisCore=backuprestore_shard1_0_replica_n5, leaderDoesNotNeedRecovery=false, 
isLeader? true, live=true, checkLive=true, currentState=active, 
localState=active, nodeName=127.0.0.1:57700_solr, coreNodeName=core_node7, 
onlyIfActiveCheckResult=false, nodeProps: 
core_node7:{"core":"backuprestore_shard1_0_replica_n5","base_url":"https://127.0.0.1:57700/solr","node_name":"127.0.0.1:57700_solr","state":"active","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 3368963 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.PrepRecoveryOp Waited 
coreNodeName: core_node7, state: active, checkLive: true, onlyIfLeader: true 
for: 0 seconds.
   [junit4]   2> 3368963 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={nodeName=127.0.0.1:57700_solr&core=backuprestore_shard1_0_replica_n5&qt=/admin/cores&coreNodeName=core_node7&action=PREPRECOVERY&checkLive=true&state=active&onlyIfLeader=true&wt=javabin&version=2}
 status=0 QTime=2
   [junit4]   2> 3368963 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.PrepRecoveryOp Going to wait for 
coreNodeName: core_node8, state: active, checkLive: true, onlyIfLeader: true, 
onlyIfLeaderActive: null, maxTime: 183 s
   [junit4]   2> 3368964 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.PrepRecoveryOp In 
WaitForState(active): collection=backuprestore, shard=shard1_1, 
thisCore=backuprestore_shard1_1_replica_n6, leaderDoesNotNeedRecovery=false, 
isLeader? true, live=true, checkLive=true, currentState=down, 
localState=active, nodeName=127.0.0.1:57700_solr, coreNodeName=core_node8, 
onlyIfActiveCheckResult=false, nodeProps: 
core_node8:{"core":"backuprestore_shard1_1_replica_n6","base_url":"https://127.0.0.1:57700/solr","node_name":"127.0.0.1:57700_solr","state":"down","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 3369060 INFO  (zkCallback-7659-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3369060 INFO  (zkCallback-7667-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 3369964 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.PrepRecoveryOp In 
WaitForState(active): collection=backuprestore, shard=shard1_1, 
thisCore=backuprestore_shard1_1_replica_n6, leaderDoesNotNeedRecovery=false, 
isLeader? true, live=true, checkLive=true, currentState=active, 
localState=active, nodeName=127.0.0.1:57700_solr, coreNodeName=core_node8, 
onlyIfActiveCheckResult=false, nodeProps: 
core_node8:{"core":"backuprestore_shard1_1_replica_n6","base_url":"https://127.0.0.1:57700/solr","node_name":"127.0.0.1:57700_solr","state":"active","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 3369964 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.PrepRecoveryOp Waited 
coreNodeName: core_node8, state: active, checkLive: true, onlyIfLeader: true 
for: 1 seconds.
   [junit4]   2> 3369964 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={nodeName=127.0.0.1:57700_solr&core=backuprestore_shard1_1_replica_n6&qt=/admin/cores&coreNodeName=core_node8&action=PREPRECOVERY&checkLive=true&state=active&onlyIfLeader=true&wt=javabin&version=2}
 status=0 QTime=1001
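The PrepRecoveryOp entries above are a wait-for-state loop: the handler polls the replica's published state until it reaches `active` (or the 183 s maxTime elapses); core_node8 started out `down`, hence the extra poll and QTime=1001 versus core_node7's QTime=2. A generic sketch of that pattern, with an illustrative method name that is not Solr's actual API:

```java
import java.util.function.Supplier;

// Hypothetical sketch of the wait-for-state pattern behind PrepRecoveryOp:
// poll a published-state supplier until it matches the desired state or a
// deadline passes. Names here are illustrative, not Solr's real classes.
public class WaitForState {
    static boolean waitFor(Supplier<String> publishedState, String desired,
                           long maxMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + maxMillis;
        while (true) {
            if (desired.equals(publishedState.get())) {
                return true; // e.g. currentState went from "down" to "active"
            }
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out, analogous to exceeding maxTime
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }
}
```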
   [junit4]   2> 3369965 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Successfully created all 
sub-shards for collection backuprestore parent shard: shard1 on: 
core_node2:{"core":"backuprestore_shard1_replica_n1","base_url":"https://127.0.0.1:57700/solr","node_name":"127.0.0.1:57700_solr","state":"active","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 3369965 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Splitting shard 
core_node2 as part of slice shard1 of collection backuprestore on 
core_node2:{"core":"backuprestore_shard1_replica_n1","base_url":"https://127.0.0.1:57700/solr","node_name":"127.0.0.1:57700_solr","state":"active","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 3369966 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.h.a.SplitOp Invoked split action for 
core: backuprestore_shard1_replica_n1
   [junit4]   2> 3369966 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start 
commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 3369966 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted 
changes. Skipping IW.commit.
   [junit4]   2> 3369966 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 3369966 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexSplitter SolrIndexSplitter: 
partitions=2 segments=1
   [junit4]   2> 3369967 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexSplitter Splitting 
_0(7.5.0):C42: 42 documents will move into a sub-shard
   [junit4]   2> 3369967 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexSplitter SolrIndexSplitter: 
partition #0 partitionCount=2 range=80000000-bfffffff
   [junit4]   2> 3369967 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexSplitter SolrIndexSplitter: 
partition #0 partitionCount=2 range=80000000-bfffffff segment #0 segmentCount=1
   [junit4]   2> 3369992 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling 
setCommitData with IW:org.apache.solr.update.SolrIndexWriter@52eac018 
commitCommandVersion:-1
   [junit4]   2> 3369993 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexSplitter SolrIndexSplitter: 
partition #1 partitionCount=2 range=c0000000-ffffffff
   [junit4]   2> 3369993 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexSplitter SolrIndexSplitter: 
partition #1 partitionCount=2 range=c0000000-ffffffff segment #0 segmentCount=1
   [junit4]   2> 3369998 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling 
setCommitData with IW:org.apache.solr.update.SolrIndexWriter@463c71a1 
commitCommandVersion:-1
   [junit4]   2> 3369999 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of 
terms at /collections/backuprestore/terms/shard1_0 to 
Terms{values={core_node7=1}, version=1}
   [junit4]   2> 3370000 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of 
terms at /collections/backuprestore/terms/shard1_1 to 
Terms{values={core_node8=1}, version=1}
   [junit4]   2> 3370000 INFO  (qtp1673140579-28117) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={core=backuprestore_shard1_replica_n1&qt=/admin/cores&action=SPLIT&targetCore=backuprestore_shard1_0_replica_n5&targetCore=backuprestore_shard1_1_replica_n6&wt=javabin&version=2}
 status=0 QTime=35
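The two partition ranges in the SolrIndexSplitter entries above (80000000-bfffffff and c0000000-ffffffff) are the parent shard1's hash range 80000000-ffffffff cut into two equal contiguous halves. A sketch of that arithmetic (illustrative; Solr's actual range math lives in its DocRouter classes):

```java
// Sketch: split a 32-bit hash range [min, max] into n equal contiguous
// sub-ranges, reproducing the partition ranges SolrIndexSplitter logs.
// Ranges are held in longs so the unsigned 32-bit math stays simple.
public class HashRangeSplit {
    static long[][] split(long min, long max, int n) {
        long[][] out = new long[n][2];
        long span = max - min + 1;
        for (int i = 0; i < n; i++) {
            out[i][0] = min + (span * i) / n;       // inclusive start
            out[i][1] = min + (span * (i + 1)) / n - 1; // inclusive end
        }
        return out;
    }

    public static void main(String[] args) {
        // Parent shard1 covers the upper half of the 32-bit hash ring.
        for (long[] r : split(0x80000000L, 0xffffffffL, 2)) {
            System.out.printf("%08x-%08x%n", r[0], r[1]);
        }
        // prints 80000000-bfffffff and c0000000-ffffffff, matching the log
    }
}
```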
   [junit4]   2> 3370001 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Index on shard: 
127.0.0.1:57700_solr split into two successfully
   [junit4]   2> 3370001 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Applying buffered updates 
on : backuprestore_shard1_0_replica_n5
   [junit4]   2> 3370001 INFO  (OverseerThreadFactory-9636-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Applying buffered updates 
on : backuprestore_shard1_1_replica_n6
   [junit4]   2> 3370002 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.CoreAdminOperation Applying 
buffered updates on core: backuprestore_shard1_1_replica_n6
   [junit4]   2> 3370002 INFO  (qtp1673140579-28118) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.CoreAdminOperation Applying 
buffered updates on core: backuprestore_shard1_0_replica_n5
   [junit4]   2> 3370002 INFO  (qtp1673140579-28124) [n:127.0.0.1:57700_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.Co

[...truncated too long message...]

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

jar-checksums:
    [mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/lucene/null1615309912
     [copy] Copying 39 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/lucene/null1615309912
   [delete] Deleting directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/lucene/null1615309912

resolve-example:

resolve-server:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

jar-checksums:
    [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/null314207074
     [copy] Copying 247 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/null314207074
   [delete] Deleting directory /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/null314207074

check-working-copy:
[ivy:cachepath] :: resolving dependencies :: org.eclipse.jgit#org.eclipse.jgit-caller;working
[ivy:cachepath]         confs: [default]
[ivy:cachepath]         found org.eclipse.jgit#org.eclipse.jgit;4.6.0.201612231935-r in public
[ivy:cachepath]         found com.jcraft#jsch;0.1.53 in public
[ivy:cachepath]         found com.googlecode.javaewah#JavaEWAH;1.1.6 in public
[ivy:cachepath]         found org.apache.httpcomponents#httpclient;4.3.6 in public
[ivy:cachepath]         found org.apache.httpcomponents#httpcore;4.3.3 in public
[ivy:cachepath]         found commons-logging#commons-logging;1.1.3 in public
[ivy:cachepath]         found commons-codec#commons-codec;1.6 in public
[ivy:cachepath]         found org.slf4j#slf4j-api;1.7.2 in public
[ivy:cachepath] :: resolution report :: resolve 90ms :: artifacts dl 7ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   8   |   0   |   0   |   0   ||   8   |   0   |
        ---------------------------------------------------------------------
[wc-checker] Initializing working copy...
[wc-checker] SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[wc-checker] SLF4J: Defaulting to no-operation (NOP) logger implementation
[wc-checker] SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[wc-checker] Checking working copy status...

-jenkins-base:

BUILD SUCCESSFUL
Total time: 165 minutes 42 seconds
Archiving artifacts
WARN: No artifacts found that match the file pattern "**/*.events,heapdumps/**,**/hs_err_pid*". Configuration error?
WARN: java.lang.InterruptedException: no matches found within 10000
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)