Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/139/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseParallelGC
1 tests failed.

FAILED:  org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.testOldReplicaIsDeletedInRaceCondition

Error Message:
Error from server at https://127.0.0.1:38621/solr: Could not fully remove collection: movereplicatest_coll4

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:38621/solr: Could not fully remove collection: movereplicatest_coll4
	at __randomizedtesting.SeedInfo.seed([4270250C0D0814BA:4820AA7ACD08751B]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
	at org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.testOldReplicaIsDeletedInRaceCondition(MoveReplicaHDFSFailoverTest.java:195)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.lang.Thread.run(Thread.java:748)

Build Log:
[...truncated 14118 lines...]
[junit4] Suite: org.apache.solr.cloud.MoveReplicaHDFSFailoverTest
[junit4] 2> 1560371 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
[junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/init-core-data-001
[junit4] 2> 1560372 WARN (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=6 numCloses=6
[junit4] 2> 1560372 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=true
[junit4] 2> 1560374 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
[junit4] 2> 1560375 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in /home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-001
[junit4] 2> 1560375 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
[junit4] 2> 1560375 INFO (Thread-3837) [ ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
[junit4] 2> 1560375 INFO (Thread-3837) [ ] o.a.s.c.ZkTestServer Starting server
[junit4] 2> 1560377 ERROR (Thread-3837) [ ] o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
[junit4] 2> 1560475 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.c.ZkTestServer start zk server on port:35221
[junit4] 2> 1560479 INFO (zkConnectionManagerCallback-3498-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1560486 INFO (jetty-launcher-3495-thread-1) [ ] o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
[junit4] 2> 1560487 INFO (jetty-launcher-3495-thread-2) [ ] o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
[junit4] 2> 1560508 INFO (jetty-launcher-3495-thread-1) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 1560508 INFO (jetty-launcher-3495-thread-1) [ ] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 1560508 INFO (jetty-launcher-3495-thread-1) [ ] o.e.j.s.session node0 Scavenging every 600000ms
[junit4] 2> 1560508 INFO (jetty-launcher-3495-thread-2) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 1560508 INFO (jetty-launcher-3495-thread-2) [ ] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 1560508 INFO (jetty-launcher-3495-thread-2) [ ] o.e.j.s.session node0 Scavenging every 600000ms
[junit4] 2> 1560508 INFO (jetty-launcher-3495-thread-1) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@189d60{/solr,null,AVAILABLE}
[junit4] 2> 1560508 INFO (jetty-launcher-3495-thread-2) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@ae680d{/solr,null,AVAILABLE}
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-1) [ ] o.e.j.s.AbstractConnector Started ServerConnector@189d02a{SSL,[ssl, http/1.1]}{127.0.0.1:46313}
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-2) [ ] o.e.j.s.AbstractConnector Started ServerConnector@8ebfbe{SSL,[ssl, http/1.1]}{127.0.0.1:38621}
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-1) [ ] o.e.j.s.Server Started @1560538ms
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-2) [ ] o.e.j.s.Server Started @1560538ms
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-1) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=46313}
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-2) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=38621}
[junit4] 2> 1560510 ERROR (jetty-launcher-3495-thread-1) [ ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
[junit4] 2> 1560510 ERROR (jetty-launcher-3495-thread-2) [ ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-1) [ ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-2) [ ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-2) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 7.6.0
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-1) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 7.6.0
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-1) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-2) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-1) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null
[junit4] 2> 1560510 INFO (jetty-launcher-3495-thread-2) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null
[junit4] 2> 1560511 INFO (jetty-launcher-3495-thread-1) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2018-12-23T08:08:25.227Z
[junit4] 2> 1560511 INFO (jetty-launcher-3495-thread-2) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2018-12-23T08:08:25.228Z
[junit4] 2> 1560517 INFO (zkConnectionManagerCallback-3501-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1560518 INFO (zkConnectionManagerCallback-3502-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1560518 INFO (jetty-launcher-3495-thread-1) [ ] o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
[junit4] 2> 1560518 INFO (jetty-launcher-3495-thread-2) [ ] o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
[junit4] 2> 1560527 WARN (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [ ] o.a.z.s.NIOServerCnxn Unable to read additional data from client sessionid 0x1001d31808d0001, likely client has closed socket
[junit4] 2> 1560913 INFO (jetty-launcher-3495-thread-2) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35221/solr
[junit4] 2> 1560914 INFO (zkConnectionManagerCallback-3506-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1560916 INFO (zkConnectionManagerCallback-3508-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1560979 INFO (jetty-launcher-3495-thread-2) [n:127.0.0.1:38621_solr ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:38621_solr
[junit4] 2> 1560980 INFO (jetty-launcher-3495-thread-2) [n:127.0.0.1:38621_solr ] o.a.s.c.Overseer Overseer (id=72089692485255172-127.0.0.1:38621_solr-n_0000000000) starting
[junit4] 2> 1560984 INFO (zkConnectionManagerCallback-3515-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1560985 INFO (jetty-launcher-3495-thread-2) [n:127.0.0.1:38621_solr ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35221/solr ready
[junit4] 2> 1560986 INFO (OverseerStateUpdate-72089692485255172-127.0.0.1:38621_solr-n_0000000000) [n:127.0.0.1:38621_solr ] o.a.s.c.Overseer Starting to work on the main queue : 127.0.0.1:38621_solr
[junit4] 2> 1560988 INFO (jetty-launcher-3495-thread-2) [n:127.0.0.1:38621_solr ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:38621_solr
[junit4] 2> 1561001 INFO (zkCallback-3507-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 1561002 INFO (zkCallback-3514-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 1561006 DEBUG (OverseerAutoScalingTriggerThread-72089692485255172-127.0.0.1:38621_solr-n_0000000000) [ ] o.a.s.c.a.NodeLostTrigger NodeLostTrigger .auto_add_replicas - Initial livenodes: [127.0.0.1:38621_solr]
[junit4] 2> 1561012 DEBUG (ScheduledTrigger-6910-thread-1) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 1
[junit4] 2> 1561019 INFO (jetty-launcher-3495-thread-2) [n:127.0.0.1:38621_solr ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
[junit4] 2> 1561044 INFO (jetty-launcher-3495-thread-2) [n:127.0.0.1:38621_solr ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_38621.solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@ef4b43
[junit4] 2> 1561052 INFO (jetty-launcher-3495-thread-2) [n:127.0.0.1:38621_solr ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_38621.solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@ef4b43
[junit4] 2> 1561053 INFO (jetty-launcher-3495-thread-2) [n:127.0.0.1:38621_solr ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_38621.solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@ef4b43
[junit4] 2> 1561054 INFO (jetty-launcher-3495-thread-2) [n:127.0.0.1:38621_solr ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-001/node2/.
[junit4] 2> 1562014 DEBUG (ScheduledTrigger-6910-thread-3) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 1
[junit4] 2> 1562289 INFO (jetty-launcher-3495-thread-1) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35221/solr
[junit4] 2> 1562290 INFO (zkConnectionManagerCallback-3520-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1562293 INFO (zkConnectionManagerCallback-3522-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1562299 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 1562302 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
[junit4] 2> 1562302 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:46313_solr
[junit4] 2> 1562303 INFO (zkCallback-3507-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
[junit4] 2> 1562304 INFO (zkCallback-3521-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
[junit4] 2> 1562304 INFO (zkCallback-3514-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
[junit4] 2> 1562330 INFO (zkConnectionManagerCallback-3529-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1562331 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
[junit4] 2> 1562332 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35221/solr ready
[junit4] 2> 1562334 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
[junit4] 2> 1562350 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_46313.solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@ef4b43
[junit4] 2> 1562360 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_46313.solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@ef4b43
[junit4] 2> 1562360 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_46313.solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@ef4b43
[junit4] 2> 1562362 INFO (jetty-launcher-3495-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-001/node1/.
[junit4] 2> 1562390 INFO (zkConnectionManagerCallback-3532-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1562395 INFO (zkConnectionManagerCallback-3537-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 1562396 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
[junit4] 2> 1562397 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35221/solr ready
[junit4] 1> Formatting using clusterid: testClusterID
[junit4] 2> 1562512 WARN (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[junit4] 2> 1562521 WARN (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
[junit4] 2> 1562523 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log jetty-6.1.26
[junit4] 2> 1562536 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log Extract jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/hdfs to ./temp/Jetty_localhost_localdomain_46081_hdfs____32mp8h/webapp
[junit4] 2> 1563014 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:46081
[junit4] 2> 1563016 DEBUG (ScheduledTrigger-6910-thread-4) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
[junit4] 2> 1563126 WARN (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
[junit4] 2> 1563127 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log jetty-6.1.26
[junit4] 2> 1563139 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log Extract jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to ./temp/Jetty_localhost_34621_datanode____dz0pgb/webapp
[junit4] 2> 1563582 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34621
[junit4] 2> 1563634 WARN (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
[junit4] 2> 1563635 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log jetty-6.1.26
[junit4] 2> 1563647 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log Extract jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to ./temp/Jetty_localhost_45313_datanode____msur17/webapp
[junit4] 2> 1563780 ERROR (DataNode: [[[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data1/, [DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data2/]] heartbeating to localhost.localdomain/127.0.0.1:43569) [ ] o.a.h.h.s.d.DirectoryScanner dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
[junit4] 2> 1563796 INFO (Block report processor) [ ] BlockStateChange BLOCK* processReport 0x1bd75639724c8: from storage DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1 node DatanodeRegistration(127.0.0.1:46851, datanodeUuid=8ff5a187-be2d-435b-810c-df8b67206c6f, infoPort=41063, infoSecurePort=0, ipcPort=45873, storageInfo=lv=-56;cid=testClusterID;nsid=1121442574;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
[junit4] 2> 1563796 INFO (Block report processor) [ ] BlockStateChange BLOCK* processReport 0x1bd75639724c8: from storage DS-35bb0c57-eb2d-4410-9619-0e444cca27f4 node DatanodeRegistration(127.0.0.1:46851, datanodeUuid=8ff5a187-be2d-435b-810c-df8b67206c6f, infoPort=41063, infoSecurePort=0, ipcPort=45873, storageInfo=lv=-56;cid=testClusterID;nsid=1121442574;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
[junit4] 2> 1564016 DEBUG (ScheduledTrigger-6910-thread-3) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
[junit4] 2> 1564179 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45313
[junit4] 2> 1564470 ERROR (DataNode: [[[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data3/, [DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data4/]] heartbeating to localhost.localdomain/127.0.0.1:43569) [ ] o.a.h.h.s.d.DirectoryScanner dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
[junit4] 2> 1564478 INFO (Block report processor) [ ] BlockStateChange BLOCK* processReport 0x1bd758c52abdb: from storage DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2 node DatanodeRegistration(127.0.0.1:37925, datanodeUuid=ec6c3b42-f20b-4412-9bd7-75c50b5af405, infoPort=45985, infoSecurePort=0, ipcPort=40607, storageInfo=lv=-56;cid=testClusterID;nsid=1121442574;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
[junit4] 2> 1564479 INFO (Block report processor) [ ] BlockStateChange BLOCK* processReport 0x1bd758c52abdb: from storage DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51 node DatanodeRegistration(127.0.0.1:37925, datanodeUuid=ec6c3b42-f20b-4412-9bd7-75c50b5af405, infoPort=45985, infoSecurePort=0, ipcPort=40607, storageInfo=lv=-56;cid=testClusterID;nsid=1121442574;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
[junit4] 2> 1564692 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.SolrTestCaseJ4 ###Starting testOldReplicaIsDeleted
[junit4] 2> 1564739 INFO (qtp19430863-15894) [n:127.0.0.1:38621_solr ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params collection.configName=conf1&name=movereplicatest_coll3&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:38621_solr&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 1564743 INFO (OverseerThreadFactory-6912-thread-1-processing-n:127.0.0.1:38621_solr) [n:127.0.0.1:38621_solr ] o.a.s.c.a.c.CreateCollectionCmd Create collection movereplicatest_coll3
[junit4] 2> 1564861 INFO (qtp19430863-16148) [n:127.0.0.1:38621_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
[junit4] 2> 1564909 INFO (qtp32863764-15887) [n:127.0.0.1:46313_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
[junit4] 2> 1564915 INFO (OverseerStateUpdate-72089692485255172-127.0.0.1:38621_solr-n_0000000000) [n:127.0.0.1:38621_solr ] o.a.s.c.o.SliceMutator createReplica() {
[junit4] 2> "operation":"ADDREPLICA",
[junit4] 2> "collection":"movereplicatest_coll3",
[junit4] 2> "shard":"shard1",
[junit4] 2> "core":"movereplicatest_coll3_shard1_replica_n1",
[junit4] 2> "state":"down",
[junit4] 2> "base_url":"https://127.0.0.1:38621/solr",
[junit4] 2> "type":"NRT",
[junit4] 2> "waitForFinalState":"false"}
[junit4] 2> 1565017 DEBUG (ScheduledTrigger-6910-thread-4) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
[junit4] 2> 1565122 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr x:movereplicatest_coll3_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&coreNodeName=core_node2&collection.configName=conf1&newCollection=true&name=movereplicatest_coll3_shard1_replica_n1&action=CREATE&numShards=1&collection=movereplicatest_coll3&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 1565122 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
[junit4] 2> 1566017 DEBUG (ScheduledTrigger-6910-thread-3) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2
[junit4] 2> 1566142 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.6.0
[junit4] 2> 1566153 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.IndexSchema [movereplicatest_coll3_shard1_replica_n1] Schema name=minimal
[junit4] 2> 1566157 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
[junit4] 2> 1566157 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'movereplicatest_coll3_shard1_replica_n1' using configuration from collection movereplicatest_coll3, trusted=true
[junit4] 2> 1566157 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_38621.solr.core.movereplicatest_coll3.shard1.replica_n1' (registry 'solr.core.movereplicatest_coll3.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@ef4b43
[junit4] 2> 1566164 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://localhost.localdomain:43569/data
[junit4] 2> 1566164 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
[junit4] 2> 1566164 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SolrCore [[movereplicatest_coll3_shard1_replica_n1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-001/node2/movereplicatest_coll3_shard1_replica_n1], dataDir=[hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/]
[junit4] 2> 1566167 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/snapshot_metadata
[junit4] 2> 1566205 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data
[junit4] 2> 1566230 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/index
[junit4] 2> 1566294 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|FINALIZED]]} size 0
[junit4] 2> 1566299 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741825_1001 size 69
[junit4] 2> 1566379 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 1566379 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 1566379 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=3
[junit4] 2> 1566394 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: disabled
[junit4] 2> 1566394 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: disabled
[junit4] 2> 1566460 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@8d4d42[movereplicatest_coll3_shard1_replica_n1] main]
[junit4] 2> 1566462 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 1566462 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 1566464 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
[junit4] 2> 1566467 WARN (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsLocalityReporter Could not retrieve locality information for hdfs://localhost.localdomain:44051/solr3 due to exception: java.net.ConnectException: Call From serv1.sd-datasolutions.de/88.99.242.108 to localhost.localdomain:44051 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused [junit4] 2> 1566468 INFO (searcherExecutor-6924-thread-1-processing-n:127.0.0.1:38621_solr x:movereplicatest_coll3_shard1_replica_n1 c:movereplicatest_coll3 s:shard1 r:core_node2) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SolrCore [movereplicatest_coll3_shard1_replica_n1] Registered new searcher Searcher@8d4d42[movereplicatest_coll3_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())} [junit4] 2> 1566468 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1620629269968322560 [junit4] 2> 1566474 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/movereplicatest_coll3/terms/shard1 to Terms{values={core_node2=0}, version=0} [junit4] 2> 1566476 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue. 
[junit4] 2> 1566476 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync [junit4] 2> 1566476 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to https://127.0.0.1:38621/solr/movereplicatest_coll3_shard1_replica_n1/ [junit4] 2> 1566477 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me [junit4] 2> 1566477 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SyncStrategy https://127.0.0.1:38621/solr/movereplicatest_coll3_shard1_replica_n1/ has no replicas [junit4] 2> 1566477 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Found all replicas participating in election, clear LIR [junit4] 2> 1566480 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: https://127.0.0.1:38621/solr/movereplicatest_coll3_shard1_replica_n1/ shard1 [junit4] 2> 1566582 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary [junit4] 2> 1566585 INFO (qtp19430863-15888) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&coreNodeName=core_node2&collection.configName=conf1&newCollection=true&name=movereplicatest_coll3_shard1_replica_n1&action=CREATE&numShards=1&collection=movereplicatest_coll3&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=1463 [junit4] 2> 1566589 INFO (qtp19430863-15894) [n:127.0.0.1:38621_solr ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 30 seconds. Check all shard replicas [junit4] 2> 1566684 INFO (zkCallback-3507-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/movereplicatest_coll3/state.json] for collection [movereplicatest_coll3] has occurred - updating... (live nodes size: [2]) [junit4] 2> 1566744 INFO (OverseerCollectionConfigSetProcessor-72089692485255172-127.0.0.1:38621_solr-n_0000000000) [n:127.0.0.1:38621_solr ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000000 doesn't exist. 
Requestor may have disconnected from ZooKeeper [junit4] 2> 1567017 DEBUG (ScheduledTrigger-6910-thread-4) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 2 [junit4] 2> 1567590 INFO (qtp19430863-15894) [n:127.0.0.1:38621_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={collection.configName=conf1&name=movereplicatest_coll3&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:38621_solr&wt=javabin&version=2} status=0 QTime=2851 [junit4] 2> 1567609 INFO (qtp19430863-15893) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/movereplicatest_coll3/terms/shard1 to Terms{values={core_node2=1}, version=1} [junit4] 2> 1567633 INFO (qtp19430863-15893) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [movereplicatest_coll3_shard1_replica_n1] webapp=/solr path=/update params={wt=javabin&version=2}{add=[1 (1620629271149019136)]} 0 40 [junit4] 2> 1567640 INFO (qtp19430863-15894) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [movereplicatest_coll3_shard1_replica_n1] webapp=/solr path=/update params={wt=javabin&version=2}{add=[2 (1620629271193059328)]} 0 4 [junit4] 2> 1567643 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@8ebfbe{SSL,[ssl, http/1.1]}{127.0.0.1:0} [junit4] 2> 1567644 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.c.CoreContainer Shutting down CoreContainer instance=31056095 [junit4] 2> 1567645 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] 
o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.node, tag=null [junit4] 2> 1567645 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@1b8d626: rootName = solr_38621, domain = solr.node, service url = null, agent id = null] for registry solr.node / com.codahale.metrics.MetricRegistry@1d53c0d [junit4] 2> 1567654 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jvm, tag=null [junit4] 2> 1567654 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@5619bb: rootName = solr_38621, domain = solr.jvm, service url = null, agent id = null] for registry solr.jvm / com.codahale.metrics.MetricRegistry@b0f23c [junit4] 2> 1567659 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jetty, tag=null [junit4] 2> 1567660 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@184a4a9: rootName = solr_38621, domain = solr.jetty, service url = null, agent id = null] for registry solr.jetty / com.codahale.metrics.MetricRegistry@1d029e2 [junit4] 2> 1567662 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.c.ZkController Remove node as live in ZooKeeper:/live_nodes/127.0.0.1:38621_solr [junit4] 2> 1567662 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.cluster, tag=null [junit4] 2> 1567663 INFO 
(zkCallback-3521-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (1) [junit4] 2> 1567663 INFO (zkCallback-3514-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (1) [junit4] 2> 1567663 INFO (zkCallback-3528-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (1) [junit4] 2> 1567663 INFO (zkCallback-3507-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (1) [junit4] 2> 1567663 INFO (zkCallback-3536-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (1) [junit4] 2> 1567666 INFO (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SolrCore [movereplicatest_coll3_shard1_replica_n1] CLOSING SolrCore org.apache.solr.core.SolrCore@1688f58 [junit4] 2> 1567666 INFO (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.core.movereplicatest_coll3.shard1.replica_n1, tag=1688f58 [junit4] 2> 1567666 INFO (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@175f0f2: rootName = solr_38621, domain = solr.core.movereplicatest_coll3.shard1.replica_n1, service url = null, agent id = null] for registry solr.core.movereplicatest_coll3.shard1.replica_n1 / com.codahale.metrics.MetricRegistry@122c21c [junit4] 2> 1567675 WARN (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsLocalityReporter Could not retrieve locality information for hdfs://localhost.localdomain:44051/solr3 due to 
exception: java.net.ConnectException: Call From serv1.sd-datasolutions.de/88.99.242.108 to localhost.localdomain:44051 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused [junit4] 2> 1567676 WARN (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsLocalityReporter Could not retrieve locality information for hdfs://localhost.localdomain:44051/solr3 due to exception: java.net.ConnectException: Call From serv1.sd-datasolutions.de/88.99.242.108 to localhost.localdomain:44051 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused [junit4] 2> 1567677 WARN (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsLocalityReporter Could not retrieve locality information for hdfs://localhost.localdomain:44051/solr3 due to exception: java.net.ConnectException: Call From serv1.sd-datasolutions.de/88.99.242.108 to localhost.localdomain:44051 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused [junit4] 2> 1567684 INFO (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.collection.movereplicatest_coll3.shard1.leader, tag=1688f58 [junit4] 2> 1567684 INFO (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 Committing on IndexWriter close. 
[junit4] 2> 1567684 INFO (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@1932ed7 commitCommandVersion:0 [junit4] 2> 1567707 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW]]} size 0 [junit4] 2> 1567709 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741827_1003 size 186 [junit4] 2> 1567734 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW]]} size 0 [junit4] 2> 1567736 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW]]} size 0 [junit4] 2> 1567759 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567760 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741829_1005 size 59 [junit4] 2> 1567765 INFO (zkCallback-3507-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/movereplicatest_coll3/state.json] for collection [movereplicatest_coll3] has occurred - updating... (live nodes size: [1]) [junit4] 2> 1567781 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567782 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741830_1006 size 83 [junit4] 2> 1567804 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567805 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW], 
ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567827 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567828 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741832_1008{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567845 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|FINALIZED]]} size 0 [junit4] 2> 1567845 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|FINALIZED], ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|FINALIZED]]} size 0 [junit4] 2> 1567862 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW]]} size 0 [junit4] 2> 1567863 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|FINALIZED]]} size 0 [junit4] 2> 1567881 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW]]} size 0 [junit4] 2> 1567882 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW]]} size 0 [junit4] 2> 1567901 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|FINALIZED]]} size 0 [junit4] 2> 1567902 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap 
updated: 127.0.0.1:46851 is added to blk_1073741836_1012{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|FINALIZED], ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|FINALIZED]]} size 0 [junit4] 2> 1567922 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567923 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741837_1013 size 179 [junit4] 2> 1567942 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW]]} size 0 [junit4] 2> 1567942 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|FINALIZED]]} size 0 [junit4] 2> 1567960 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567960 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e835ce24-a3ca-4dad-a95b-ccbcf422dfe2:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567973 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741840_1016{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1567974 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741840_1016{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|FINALIZED]]} size 0 [junit4] 2> 1567989 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-35bb0c57-eb2d-4410-9619-0e444cca27f4:NORMAL:127.0.0.1:46851|RBW]]} size 0 [junit4] 2> 1567990 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added 
to blk_1073741841_1017 size 100 [junit4] 2> 1568002 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1568003 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741842_1018{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW], ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW]]} size 0 [junit4] 2> 1568018 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37925 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW]]} size 74 [junit4] 2> 1568018 DEBUG (ScheduledTrigger-6910-thread-3) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 1 [junit4] 2> 1568018 DEBUG (ScheduledTrigger-6910-thread-3) [ ] o.a.s.c.a.NodeLostTrigger Tracking lost node: 127.0.0.1:38621_solr [junit4] 2> 1568018 INFO (Block report processor) [ ] BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46851 is added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51:NORMAL:127.0.0.1:37925|RBW], ReplicaUC[[DISK]DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1:NORMAL:127.0.0.1:46851|RBW]]} size 74 
[junit4] 2> 1568027 INFO (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsDirectory Closing hdfs directory hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data [junit4] 2> 1568028 INFO (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsDirectory Closing hdfs directory hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/snapshot_metadata [junit4] 2> 1568028 INFO (coreCloseExecutor-6929-thread-1) [n:127.0.0.1:38621_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsDirectory Closing hdfs directory hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/index [junit4] 2> 1568031 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.c.Overseer Overseer (id=72089692485255172-127.0.0.1:38621_solr-n_0000000000) closing [junit4] 2> 1568031 INFO (OverseerStateUpdate-72089692485255172-127.0.0.1:38621_solr-n_0000000000) [n:127.0.0.1:38621_solr ] o.a.s.c.Overseer Overseer Loop exiting : 127.0.0.1:38621_solr [junit4] 2> 1568029 ERROR (OldIndexDirectoryCleanupThreadForCore-movereplicatest_coll3_shard1_replica_n1) [ ] o.a.s.c.HdfsDirectoryFactory Error checking for old index directories to clean-up. [junit4] 2> java.io.IOException: Filesystem closed [junit4] 2> at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808) ~[hadoop-hdfs-2.7.4.jar:?] [junit4] 2> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2083) ~[hadoop-hdfs-2.7.4.jar:?] [junit4] 2> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2069) ~[hadoop-hdfs-2.7.4.jar:?] 
[junit4] 2> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:791) ~[hadoop-hdfs-2.7.4.jar:?] [junit4] 2> at org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106) ~[hadoop-hdfs-2.7.4.jar:?] [junit4] 2> at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853) ~[hadoop-hdfs-2.7.4.jar:?] [junit4] 2> at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849) ~[hadoop-hdfs-2.7.4.jar:?] [junit4] 2> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.4.jar:?] [junit4] 2> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860) ~[hadoop-hdfs-2.7.4.jar:?] [junit4] 2> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517) ~[hadoop-common-2.7.4.jar:?] [junit4] 2> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557) ~[hadoop-common-2.7.4.jar:?] [junit4] 2> at org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:528) ~[java/:?] [junit4] 2> at org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$21(SolrCore.java:3099) ~[java/:?] [junit4] 2> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172] [junit4] 2> 1568031 WARN (OverseerAutoScalingTriggerThread-72089692485255172-127.0.0.1:38621_solr-n_0000000000) [ ] o.a.s.c.a.OverseerTriggerThread OverseerTriggerThread woken up but we are closed, exiting. 
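The `java.io.IOException: Filesystem closed` above is a shutdown race: core close tears down the shared HDFS client while the background old-index-cleanup thread is still calling `listStatus` on it, and `DFSClient.checkOpen()` then rejects the call. A minimal sketch of that failure mode in plain Python (all names here are hypothetical stand-ins, not the Hadoop or Solr API):

```python
import threading

class SharedFilesystem:
    """Hypothetical stand-in for a shared HDFS client handle."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def list_paths(self, path):
        # Mirrors the checkOpen() guard: any call after close() fails.
        if self.closed:
            raise IOError("Filesystem closed")
        return []

fs = SharedFilesystem()
errors = []

def cleanup_old_index_dirs():
    # Background cleanup thread racing against core close.
    try:
        fs.list_paths("/data/coll/core_node2")
    except IOError as e:
        errors.append(str(e))

fs.close()  # core close wins the race before cleanup runs
worker = threading.Thread(target=cleanup_old_index_dirs)
worker.start()
worker.join()
print(errors)  # ['Filesystem closed']
```

In the test log this is benign noise at shutdown: the cleanup thread loses the race, logs the ERROR, and the JVM proceeds; it is distinct from the collection-delete failure that actually failed the test.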
[junit4] 2> 1568037 INFO (zkCallback-3521-thread-1) [ ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:46313_solr
[junit4] 2> 1568037 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@ae680d{/solr,null,UNAVAILABLE}
[junit4] 2> 1568038 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.e.j.s.session node0 Stopped scavenging
[junit4] 2> 1568039 INFO (zkCallback-3521-thread-1) [n:127.0.0.1:46313_solr ] o.a.s.c.Overseer Overseer (id=72089692485255175-127.0.0.1:46313_solr-n_0000000001) starting
[junit4] 2> 1568048 INFO (qtp32863764-15885) [n:127.0.0.1:46313_solr ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :movereplica with params replica=core_node2&action=MOVEREPLICA&collection=movereplicatest_coll3&targetNode=127.0.0.1:46313_solr&wt=javabin&version=2&inPlaceMove=true and sendToOCPQueue=true
[junit4] 2> 1568048 INFO (OverseerStateUpdate-72089692485255175-127.0.0.1:46313_solr-n_0000000001) [n:127.0.0.1:46313_solr ] o.a.s.c.Overseer Starting to work on the main queue : 127.0.0.1:46313_solr
[junit4] 2> 1568053 INFO (OverseerThreadFactory-6934-thread-1-processing-n:127.0.0.1:46313_solr) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 r:core_node2 ] o.a.s.c.a.c.MoveReplicaCmd Replica will be moved to node 127.0.0.1:46313_solr: core_node2:{"dataDir":"hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/","base_url":"https://127.0.0.1:38621/solr","node_name":"127.0.0.1:38621_solr","type":"NRT","force_set_state":"false","ulogDir":"hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/tlog","core":"movereplicatest_coll3_shard1_replica_n1","shared_storage":"true","state":"down","leader":"true"}
[junit4] 2> 1568053 DEBUG (OverseerAutoScalingTriggerThread-72089692485255175-127.0.0.1:46313_solr-n_0000000001) [ ] o.a.s.c.a.NodeLostTrigger NodeLostTrigger .auto_add_replicas - Initial livenodes: [127.0.0.1:46313_solr]
[junit4] 2> 1568053 DEBUG (OverseerAutoScalingTriggerThread-72089692485255175-127.0.0.1:46313_solr-n_0000000001) [ ] o.a.s.c.a.NodeLostTrigger Adding lost node from marker path: 127.0.0.1:38621_solr
[junit4] 2> 1568053 INFO (OverseerThreadFactory-6934-thread-1-processing-n:127.0.0.1:46313_solr) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 r:core_node2 ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:46313_solr for creating new replica of shard shard1 for collection movereplicatest_coll3
[junit4] 2> 1568055 DEBUG (ScheduledTrigger-6932-thread-2) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 1
[junit4] 2> 1568065 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr x:movereplicatest_coll3_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&coreNodeName=core_node2&dataDir=hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/&collection.configName=conf1&name=movereplicatest_coll3_shard1_replica_n1&action=CREATE&collection=movereplicatest_coll3&shard=shard1&wt=javabin&version=2&ulogDir=hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/&replicaType=NRT
[junit4] 2> 1569057 DEBUG (ScheduledTrigger-6932-thread-3) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 1
[junit4] 2> 1569099 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.6.0
[junit4] 2> 1569113 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.IndexSchema [movereplicatest_coll3_shard1_replica_n1] Schema name=minimal
[junit4] 2> 1569116 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
[junit4] 2> 1569116 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'movereplicatest_coll3_shard1_replica_n1' using configuration from collection movereplicatest_coll3, trusted=true
[junit4] 2> 1569117 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_46313.solr.core.movereplicatest_coll3.shard1.replica_n1' (registry 'solr.core.movereplicatest_coll3.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@ef4b43
[junit4] 2> 1569117 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://localhost.localdomain:43569/data
[junit4] 2> 1569117 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
[junit4] 2> 1569117 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SolrCore [[movereplicatest_coll3_shard1_replica_n1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-001/node1/movereplicatest_coll3_shard1_replica_n1], dataDir=[hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/]
[junit4] 2> 1569120 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/snapshot_metadata
[junit4] 2> 1569133 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data
[junit4] 2> 1569223 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 1569223 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 1569223 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=3
[junit4] 2> 1569235 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.FSHDFSUtils Recovering lease on dfs file hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/tlog/tlog.0000000000000000000
[junit4] 2> 1569334 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: disabled
[junit4] 2> 1569334 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: disabled
[junit4] 2> 1569337 INFO
(qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/index [junit4] 2> 1569365 INFO (IPC Server handler 8 on 43569) [ ] BlockStateChange BLOCK* addToInvalidates: blk_1073741825_1001 127.0.0.1:37925 127.0.0.1:46851 [junit4] 2> 1569394 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@190117b[movereplicatest_coll3_shard1_replica_n1] main] [junit4] 2> 1569396 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1 [junit4] 2> 1569396 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1 [junit4] 2> 1569397 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms. 
[junit4] 2> 1569400 WARN (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsLocalityReporter Could not retrieve locality information for hdfs://localhost.localdomain:44051/solr3 due to exception: java.net.ConnectException: Call From serv1.sd-datasolutions.de/88.99.242.108 to localhost.localdomain:44051 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused [junit4] 2> 1569400 INFO (searcherExecutor-6935-thread-1-processing-n:127.0.0.1:46313_solr x:movereplicatest_coll3_shard1_replica_n1 c:movereplicatest_coll3 s:shard1 r:core_node2) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SolrCore [movereplicatest_coll3_shard1_replica_n1] Registered new searcher Searcher@190117b[movereplicatest_coll3_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.6.0):C2)))} [junit4] 2> 1569412 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue. 
[junit4] 2> 1569412 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
[junit4] 2> 1569412 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to https://127.0.0.1:46313/solr/movereplicatest_coll3_shard1_replica_n1/
[junit4] 2> 1569413 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
[junit4] 2> 1569413 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SyncStrategy https://127.0.0.1:46313/solr/movereplicatest_coll3_shard1_replica_n1/ has no replicas
[junit4] 2> 1569413 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Found all replicas participating in election, clear LIR
[junit4] 2> 1569414 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: https://127.0.0.1:46313/solr/movereplicatest_coll3_shard1_replica_n1/ shard1
[junit4] 2> 1569426 WARN (Thread-4084) [ ] o.a.h.h.DFSClient DataStreamer Exception
[junit4] 2> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:37925,DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51,DISK], DatanodeInfoWithStorage[127.0.0.1:46851,DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:46851,DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1,DISK], DatanodeInfoWithStorage[127.0.0.1:37925,DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
[junit4] 2> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1044) ~[hadoop-hdfs-2.7.4.jar:?]
[junit4] 2> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1107) ~[hadoop-hdfs-2.7.4.jar:?]
[junit4] 2> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1276) ~[hadoop-hdfs-2.7.4.jar:?]
[junit4] 2> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:560) [hadoop-hdfs-2.7.4.jar:?]
[junit4] 2> 1569428 ERROR (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.u.HdfsTransactionLog Could not close tlog output
[junit4] 2> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:37925,DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51,DISK], DatanodeInfoWithStorage[127.0.0.1:46851,DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:46851,DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1,DISK], DatanodeInfoWithStorage[127.0.0.1:37925,DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
[junit4] 2> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1044) ~[hadoop-hdfs-2.7.4.jar:?]
[junit4] 2> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1107) ~[hadoop-hdfs-2.7.4.jar:?]
[junit4] 2> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1276) ~[hadoop-hdfs-2.7.4.jar:?]
[junit4] 2> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:560) ~[hadoop-hdfs-2.7.4.jar:?]
[junit4] 2> 1569429 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary
[junit4] 2> 1569469 INFO (qtp32863764-15882) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&coreNodeName=core_node2&dataDir=hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/&collection.configName=conf1&name=movereplicatest_coll3_shard1_replica_n1&action=CREATE&collection=movereplicatest_coll3&shard=shard1&wt=javabin&version=2&ulogDir=hdfs://localhost.localdomain:43569/data/movereplicatest_coll3/core_node2/data/&replicaType=NRT} status=0 QTime=1403
[junit4] 2> 1569473 INFO (qtp32863764-15885) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={replica=core_node2&action=MOVEREPLICA&collection=movereplicatest_coll3&targetNode=127.0.0.1:46313_solr&wt=javabin&version=2&inPlaceMove=true} status=0 QTime=1425
[junit4] 2> 1569532 INFO (zkCallback-3521-thread-1) [ ] o.a.s.c.c.ZkStateReader 
A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/movereplicatest_coll3/state.json] for collection [movereplicatest_coll3] has occurred - updating... (live nodes size: [1]) [junit4] 2> 1569532 INFO (zkCallback-3521-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/movereplicatest_coll3/state.json] for collection [movereplicatest_coll3] has occurred - updating... (live nodes size: [1]) [junit4] 2> 1570057 DEBUG (ScheduledTrigger-6932-thread-4) [ ] o.a.s.c.a.NodeLostTrigger Running NodeLostTrigger: .auto_add_replicas with currently live nodes: 1 [junit4] 2> 1570057 INFO (OverseerCollectionConfigSetProcessor-72089692485255175-127.0.0.1:46313_solr-n_0000000001) [n:127.0.0.1:46313_solr ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000002 doesn't exist. Requestor may have disconnected from ZooKeeper [junit4] 2> 1570477 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@189d02a{SSL,[ssl, http/1.1]}{127.0.0.1:0} [junit4] 2> 1570480 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.c.CoreContainer Shutting down CoreContainer instance=21311162 [junit4] 2> 1570481 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.node, tag=null [junit4] 2> 1570481 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@b776c7: rootName = solr_46313, domain = solr.node, service url = null, agent id = null] for registry solr.node / com.codahale.metrics.MetricRegistry@173921b [junit4] 2> 1570489 INFO 
(TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jvm, tag=null [junit4] 2> 1570489 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@1b5c4d4: rootName = solr_46313, domain = solr.jvm, service url = null, agent id = null] for registry solr.jvm / com.codahale.metrics.MetricRegistry@b0f23c [junit4] 2> 1570493 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jetty, tag=null [junit4] 2> 1570493 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@e5e360: rootName = solr_46313, domain = solr.jetty, service url = null, agent id = null] for registry solr.jetty / com.codahale.metrics.MetricRegistry@1d029e2 [junit4] 2> 1570495 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.c.ZkController Remove node as live in ZooKeeper:/live_nodes/127.0.0.1:46313_solr [junit4] 2> 1570495 INFO (TEST-MoveReplicaHDFSFailoverTest.testOldReplicaIsDeleted-seed#[4270250C0D0814BA]) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.cluster, tag=null [junit4] 2> 1570495 INFO (zkCallback-3536-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0) [junit4] 2> 1570495 INFO (zkCallback-3521-thread-2) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0) [junit4] 2> 1570496 INFO (zkCallback-3528-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... 
(1) -> (0) [junit4] 2> 1570496 INFO (coreCloseExecutor-6940-thread-1) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.c.SolrCore [movereplicatest_coll3_shard1_replica_n1] CLOSING SolrCore org.apache.solr.core.SolrCore@8cb816 [junit4] 2> 1570496 INFO (coreCloseExecutor-6940-thread-1) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.core.movereplicatest_coll3.shard1.replica_n1, tag=8cb816 [junit4] 2> 1570497 INFO (coreCloseExecutor-6940-thread-1) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@15b2be4: rootName = solr_46313, domain = solr.core.movereplicatest_coll3.shard1.replica_n1, service url = null, agent id = null] for registry solr.core.movereplicatest_coll3.shard1.replica_n1 / com.codahale.metrics.MetricRegistry@19e5e3f [junit4] 2> 1570505 WARN (coreCloseExecutor-6940-thread-1) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsLocalityReporter Could not retrieve locality information for hdfs://localhost.localdomain:44051/solr3 due to exception: java.net.ConnectException: Call From serv1.sd-datasolutions.de/88.99.242.108 to localhost.localdomain:44051 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused [junit4] 2> 1570506 WARN (coreCloseExecutor-6940-thread-1) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsLocalityReporter Could not retrieve locality information for hdfs://localhost.localdomain:44051/solr3 due to exception: java.net.ConnectException: Call 
From serv1.sd-datasolutions.de/88.99.242.108 to localhost.localdomain:44051 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused [junit4] 2> 1570506 WARN (coreCloseExecutor-6940-thread-1) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.s.h.HdfsLocalityReporter Could not retrieve locality information for hdfs://localhost.localdomain:44051/solr3 due to exception: java.net.ConnectException: Call From serv1.sd-datasolutions.de/88.99.242.108 to localhost.localdomain:44051 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused [junit4] 2> 1570513 INFO (coreCloseExecutor-6940-thread-1) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.collection.movereplicatest_coll3.shard1.leader, tag=8cb816 [junit4] 2> 1570516 ERROR (coreCloseExecutor-6940-thread-1) [n:127.0.0.1:46313_solr c:movereplicatest_coll3 s:shard1 r:core_node2 x:movereplicatest_coll3_shard1_replica_n1] o.a.h.h.DFSClient Failed to close inode 16396 [junit4] 2> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:37925,DS-e46c1a56-23aa-4c0a-b0a3-b0642afbff51,DISK], DatanodeInfoWithStorage[127.0.0.1:46851,DS-6ccdc4c0-6a85-4bc9-a6f3-bb7860705fc1,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:46 [...truncated too long message...] 
alhost:0 [junit4] 2> 1643878 WARN (DataNode: [[[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data3/, [DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data4/]] heartbeating to localhost.localdomain/127.0.0.1:43569) [ ] o.a.h.h.s.d.IncrementalBlockReportManager IncrementalBlockReportManager interrupted [junit4] 2> 1643878 WARN (DataNode: [[[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data3/, [DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data4/]] heartbeating to localhost.localdomain/127.0.0.1:43569) [ ] o.a.h.h.s.d.DataNode Ending block pool service for: Block pool BP-1489582878-88.99.242.108-1545552507144 (Datanode Uuid ec6c3b42-f20b-4412-9bd7-75c50b5af405) service to localhost.localdomain/127.0.0.1:43569 [junit4] 2> 1643882 WARN (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.h.h.s.d.DirectoryScanner DirectoryScanner: shutdown has been called [junit4] 2> 1643889 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 [junit4] 2> 1643990 WARN (DataNode: [[[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data1/, 
[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data2/]] heartbeating to localhost.localdomain/127.0.0.1:43569) [ ] o.a.h.h.s.d.IncrementalBlockReportManager IncrementalBlockReportManager interrupted [junit4] 2> 1643990 WARN (DataNode: [[[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data1/, [DISK]file:/home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001/tempDir-002/hdfsBaseDir/data/data2/]] heartbeating to localhost.localdomain/127.0.0.1:43569) [ ] o.a.h.h.s.d.DataNode Ending block pool service for: Block pool BP-1489582878-88.99.242.108-1545552507144 (Datanode Uuid 8ff5a187-be2d-435b-810c-df8b67206c6f) service to localhost.localdomain/127.0.0.1:43569 [junit4] 2> 1643999 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.m.log Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 [junit4] 2> 1644129 INFO (SUITE-MoveReplicaHDFSFailoverTest-seed#[4270250C0D0814BA]-worker) [ ] o.a.s.c.ZkTestServer connecting to 127.0.0.1:35221 35221 [junit4] 2> NOTE: leaving temporary files on disk at: /home/jenkins/workspace/Lucene-Solr-7.6-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSFailoverTest_4270250C0D0814BA-001 [junit4] 2> Dec 23, 2018 8:09:48 AM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks [junit4] 2> WARNING: Will linger awaiting termination of 65 leaked thread(s). 
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): {id=PostingsFormat(name=MockRandom)}, docValues:{_version_=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=1975, maxMBSortInHeap=5.413038810192915, sim=RandomSimilarity(queryNorm=true): {}, locale=fr-BE, timezone=America/Punta_Arenas [junit4] 2> NOTE: Linux 4.15.0-42-generic i386/Oracle Corporation 1.8.0_172 (32-bit)/cpus=8,threads=3,free=167022784,total=428343296 [junit4] 2> NOTE: All tests run in this JVM: [TestQueryTypes, TestFuzzyAnalyzedSuggestions, TestFilteredDocIdSet, V2ApiIntegrationTest, HdfsUnloadDistributedZkTest, TestAuthorizationFramework, SolrCmdDistributorTest, TestJsonFacetRefinement, TestBinaryField, TestPerFieldSimilarity, TestCoreContainer, ResourceLoaderTest, CoreAdminCreateDiscoverTest, LeaderFailoverAfterPartitionTest, TestMinMaxOnMultiValuedField, TestSizeLimitedDistributedMap, TestSolrCloudWithHadoopAuthPlugin, TestStressLucene, TestHttpShardHandlerFactory, TestUpdate, ClassificationUpdateProcessorIntegrationTest, TestSolrDeletionPolicy2, BigEndianAscendingWordDeserializerTest, TestManagedSchemaThreadSafety, TestTrieFacet, TestManagedStopFilterFactory, OverriddenZkACLAndCredentialsProvidersTest, SpatialFilterTest, RandomizedTaggerTest, TestEmbeddedSolrServerAdminHandler, TestLeaderElectionZkExpiry, QueryEqualityTest, SolrJmxReporterCloudTest, SecurityConfHandlerTest, DistributedIntervalFacetingTest, TestExtendedDismaxParser, ZookeeperStatusHandlerTest, TestLegacyNumericRangeQueryBuilder, PeerSyncTest, TestHdfsCloudBackupRestore, BlockJoinFacetRandomTest, SuggestComponentTest, CollectionsAPISolrJTest, LeaderElectionContextKeyTest, TriggerSetPropertiesIntegrationTest, TestConfigSetImmutable, PhrasesIdentificationComponentTest, TestSha256AuthenticationProvider, TestConfigSetsAPIZkFailure, ParsingFieldUpdateProcessorsTest, SuggesterWFSTTest, ConvertedLegacyTest, AsyncCallRequestStatusResponseTest, TestWriterPerf, TestClusterProperties, TestSimExtremeIndexing, 
TestRandomRequestDistribution, TestCharFilters, TestFacetMethods, ByteBuffersDirectoryFactoryTest, ImplicitSnitchTest, SpellCheckCollatorTest, TestXIncludeConfig, TestConfigReload, TestChildDocTransformerHierarchy, ResponseHeaderTest, TestRecoveryHdfs, DeleteReplicaTest, StatelessScriptUpdateProcessorFactoryTest, BlockCacheTest, TestDynamicURP, HdfsDirectoryFactoryTest, TestDFISimilarityFactory, CoreAdminRequestStatusTest, TestStressInPlaceUpdates, TestDynamicFieldCollectionResource, TestMiniSolrCloudClusterSSL, TestLMDirichletSimilarityFactory, NodeAddedTriggerTest, HighlighterConfigTest, TestSmileRequest, DirectUpdateHandlerTest, TestSchemaManager, TaggingAttributeTest, TestSimDistribStateManager, TestTolerantUpdateProcessorCloud, CustomCollectionTest, ForceLeaderTest, ManagedSchemaRoundRobinCloudTest, TestCustomStream, JSONWriterTest, TestSolrCoreParser, HLLUtilTest, HttpPartitionTest, UtilsToolTest, TestSchemaVersionResource, TestFieldCacheWithThreads, DateMathParserTest, HdfsRecoverLeaseTest, DirectSolrSpellCheckerTest, SpellingQueryConverterTest, SolrCloudReportersTest, TestMultiWordSynonyms, TestCloudInspectUtil, TestSimNodeAddedTrigger, AlternateDirectoryTest, DebugComponentTest, MultiThreadedOCPTest, TestSimpleTrackingShardHandler, NumericFieldsTest, DirectSolrConnectionTest, SimpleCollectionCreateDeleteTest, TestMergePolicyConfig, StatsReloadRaceTest, DistributedFacetExistsSmallTest, MetricTriggerIntegrationTest, CdcrRequestHandlerTest, TestCloudSchemaless, TestCustomSort, DocValuesMultiTest, SolrXmlInZkTest, PingRequestHandlerTest, TestExactSharedStatsCache, TestJsonFacetsWithNestedObjects, UpdateParamsTest, TestNumericTerms64, TestExclusionRuleCollectionAccess, PreAnalyzedFieldManagedSchemaCloudTest, CleanupOldIndexTest, AnalysisErrorHandlingTest, TestCSVResponseWriter, SpatialHeatmapFacetsTest, ZkNodePropsTest, XsltUpdateRequestHandlerTest, AssignTest, AnalysisAfterCoreReloadTest, BasicFunctionalityTest, EchoParamsTest, MinimalSchemaTest, 
OutputWriterTest, SampleTest, SolrInfoBeanTest, SolrTestCaseJ4Test, TestDistributedGrouping, TestDocumentBuilder, TestGroupingSearch, TestHighlightDedupGrouping, TestJoin, TestRandomDVFaceting, TestRandomFaceting, TestSolrCoreProperties, TestTolerantSearch, TestTrie, TestWordDelimiterFilterFactory, TestEmbeddedSolrServerConstructors, TestEmbeddedSolrServerSchemaAPI, TestJettySolrRunner, ConnectionReuseTest, ActionThrottleTest, AddReplicaTest, BasicZkTest, ChaosMonkeySafeLeaderTest, CloudExitableDirectoryReaderTest, ClusterStateTest, ClusterStateUpdateTest, ConcurrentCreateRoutedAliasTest, ConfigSetsAPITest, ConnectionManagerTest, CreateCollectionCleanupTest, CreateRoutedAliasTest, DeleteInactiveReplicaTest, DeleteLastCustomShardedReplicaTest, DeleteNodeTest, DistribCursorPagingTest, DistributedQueueTest, DistributedVersionInfoTest, DocValuesNotIndexedTest, LeaderVoteWaitTimeoutTest, LegacyCloudClusterPropTest, MetricsHistoryIntegrationTest, MigrateRouteKeyTest, MissingSegmentRecoveryTest, MoveReplicaHDFSFailoverTest]
[junit4] Completed [446/836 (1!)] on J2 in 92.23s, 3 tests, 1 error <<< FAILURES!
[...truncated 46632 lines...]
[repro] Jenkins log URL: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/139/consoleText
[repro] Revision: e1d5761f7b976aa4ab83969f9a699597c0855b3e
[repro] Ant options: "-Dargs=-client -XX:+UseParallelGC"
[repro] Repro line: ant test -Dtestcase=MoveReplicaHDFSFailoverTest -Dtests.method=testOldReplicaIsDeletedInRaceCondition -Dtests.seed=4270250C0D0814BA -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=fr-BE -Dtests.timezone=America/Punta_Arenas -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] ant clean
[...truncated 6 lines...]
[repro] Test suites by module:
[repro] solr/core
[repro] MoveReplicaHDFSFailoverTest
[repro] ant compile-test
[...truncated 3580 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.MoveReplicaHDFSFailoverTest" -Dtests.showOutput=onerror "-Dargs=-client -XX:+UseParallelGC" -Dtests.seed=4270250C0D0814BA -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=fr-BE -Dtests.timezone=America/Punta_Arenas -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[...truncated 81 lines...]
[repro] Failures:
[repro] 0/5 failed: org.apache.solr.cloud.MoveReplicaHDFSFailoverTest
[repro] Exiting with code 0
[...truncated 40 lines...]
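A note on the repeated `java.io.IOException: Failed to replace a bad datanode on the existing pipeline` above: the exception message itself points at the HDFS client setting `dfs.client.block.write.replace-datanode-on-failure.policy`. On the embedded two-datanode MiniDFSCluster used by this test, the DEFAULT policy can have no spare datanode to substitute during pipeline recovery, so the tlog close fails. As a sketch only (these are standard Hadoop 2.x client properties, but whether relaxing them is appropriate for this test is an assumption, not something the log confirms), a small dev/test cluster might disable datanode replacement in `hdfs-site.xml`:

```xml
<!-- hdfs-site.xml (client side) - illustrative settings for tiny test
     clusters only; do NOT relax these on production clusters, where
     skipping datanode replacement reduces write durability. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <!-- DEFAULT tries to find a replacement datanode and fails the write
       when none exists; NEVER continues on the surviving pipeline. -->
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
```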
