Build: https://ci-builds.apache.org/job/Lucene/job/Lucene-Solr-Tests-8.11/583/

1 test failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.index.hdfs.CheckHdfsIndexTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.index.hdfs.CheckHdfsIndexTest:
   1) Thread[id=12676, name=Command processor, state=WAITING, group=TGRP-CheckHdfsIndexTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.index.hdfs.CheckHdfsIndexTest:
   1) Thread[id=12676, name=Command processor, state=WAITING, group=TGRP-CheckHdfsIndexTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)
        at __randomizedtesting.SeedInfo.seed([5CCE405D7E0D62F3]:0)
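
The leaked thread is the HDFS DataNode's "Command processor" thread (BPServiceActor$CommandProcessingThread), still running from the test's embedded HDFS cluster when the suite-level leak check fired. A minimal sketch of how such a known Hadoop thread could be excluded from the check, assuming the com.carrotsearch.randomizedtesting ThreadFilter / @ThreadLeakFilters API; the filter class and its usage below are hypothetical and not part of this build output:

    import com.carrotsearch.randomizedtesting.ThreadFilter;

    // Hypothetical filter: tells the runner to ignore the DataNode's
    // "Command processor" thread when checking for suite-level leaks.
    public class HdfsCommandProcessorThreadsFilter implements ThreadFilter {
      @Override
      public boolean reject(Thread t) {
        // Returning true means "do not report this thread as a leak".
        return "Command processor".equals(t.getName());
      }
    }

    // Hypothetical usage on the suite:
    // @ThreadLeakFilters(defaultFilters = true,
    //     filters = { HdfsCommandProcessorThreadsFilter.class })
    // public class CheckHdfsIndexTest extends ... { ... }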




Build Log:
[...truncated 14650 lines...]
   [junit4] Suite: org.apache.solr.index.hdfs.CheckHdfsIndexTest
   [junit4]   2> 919200 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.s.SolrTestCase Setting 'solr.default.confdir' system property to 
test-framework derived value of 
'/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/server/solr/configsets/_default/conf'
   [junit4]   2> 919200 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 919201 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.s.u.ErrorLogMuter Closing ErrorLogMuter-regex-171 after mutting 0 log 
messages
   [junit4]   2> 919201 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.s.u.ErrorLogMuter Creating ErrorLogMuter-regex-172 for ERROR logs matching 
regex: ignore_exception
   [junit4]   2> 919201 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.s.SolrTestCaseJ4 Created dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/data-dir-87-001
   [junit4]   2> 919201 WARN  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=90 numCloses=90
   [junit4]   2> 919202 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 919204 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=None)
   [junit4]   2> 919204 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /kv/m
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 919315 WARN  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.h.s.a.s.AuthenticationFilter Unable to initialize FileSignerSecretProvider, 
falling back to use random secrets. Reason: access denied 
("java.io.FilePermission" "/home/jenkins/hadoop-http-auth-signature-secret" 
"read")
   [junit4]   2> 919322 WARN  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 919324 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.e.j.s.Server jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
   [junit4]   2> 919330 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 919330 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 919330 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 919330 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@5baafc69{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,AVAILABLE}
   [junit4]   2> 919461 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.w.WebAppContext@5e789c8e{hdfs,/,file:///home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/jetty-localhost_localdomain-39767-hadoop-hdfs-3_2_4-tests_jar-_-any-1532342880018321433/webapp/,AVAILABLE}{jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/hdfs}
   [junit4]   2> 919462 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.e.j.s.AbstractConnector Started ServerConnector@64ec0361{HTTP/1.1, 
(http/1.1)}{localhost.localdomain:39767}
   [junit4]   2> 919462 INFO  
(SUITE-CheckHdfsIndexTest-seed#[5CCE405D7E0D62F3]-worker) [     ] 
o.e.j.s.Server Started @919493ms
   [junit4]   2> 919871 WARN  (Listener at localhost.localdomain/34295) [     ] 
o.a.h.s.a.s.AuthenticationFilter Unable to initialize FileSignerSecretProvider, 
falling back to use random secrets. Reason: access denied 
("java.io.FilePermission" "/home/jenkins/hadoop-http-auth-signature-secret" 
"read")
   [junit4]   2> 919876 WARN  (Listener at localhost.localdomain/34295) [     ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 919879 INFO  (Listener at localhost.localdomain/34295) [     ] 
o.e.j.s.Server jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
   [junit4]   2> 919883 INFO  (Listener at localhost.localdomain/34295) [     ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 919883 INFO  (Listener at localhost.localdomain/34295) [     ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 919883 INFO  (Listener at localhost.localdomain/34295) [     ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 919884 INFO  (Listener at localhost.localdomain/34295) [     ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@40ce2f2d{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,AVAILABLE}
   [junit4]   2> 920017 INFO  (Listener at localhost.localdomain/34295) [     ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.w.WebAppContext@4d8d7f0e{datanode,/,file:///home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/jetty-localhost-45109-hadoop-hdfs-3_2_4-tests_jar-_-any-7075407528543883981/webapp/,AVAILABLE}{jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/datanode}
   [junit4]   2> 920018 INFO  (Listener at localhost.localdomain/34295) [     ] 
o.e.j.s.AbstractConnector Started ServerConnector@6e20a1da{HTTP/1.1, 
(http/1.1)}{localhost:45109}
   [junit4]   2> 920018 INFO  (Listener at localhost.localdomain/34295) [     ] 
o.e.j.s.Server Started @920049ms
   [junit4]   2> 920065 WARN  (Listener at localhost.localdomain/41749) [     ] 
o.a.h.s.a.s.AuthenticationFilter Unable to initialize FileSignerSecretProvider, 
falling back to use random secrets. Reason: access denied 
("java.io.FilePermission" "/home/jenkins/hadoop-http-auth-signature-secret" 
"read")
   [junit4]   2> 920066 WARN  (Listener at localhost.localdomain/41749) [     ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 920067 INFO  (Listener at localhost.localdomain/41749) [     ] 
o.e.j.s.Server jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
   [junit4]   2> 920068 INFO  (Listener at localhost.localdomain/41749) [     ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 920069 INFO  (Listener at localhost.localdomain/41749) [     ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 920069 INFO  (Listener at localhost.localdomain/41749) [     ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 920071 INFO  (Listener at localhost.localdomain/41749) [     ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@2fbaeb79{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,AVAILABLE}
   [junit4]   2> 920209 INFO  (Listener at localhost.localdomain/41749) [     ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.w.WebAppContext@1fc23aa5{datanode,/,file:///home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/jetty-localhost-36655-hadoop-hdfs-3_2_4-tests_jar-_-any-261831139971769820/webapp/,AVAILABLE}{jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/datanode}
   [junit4]   2> 920210 INFO  (Listener at localhost.localdomain/41749) [     ] 
o.e.j.s.AbstractConnector Started ServerConnector@2c0f6bac{HTTP/1.1, 
(http/1.1)}{localhost:36655}
   [junit4]   2> 920210 INFO  (Listener at localhost.localdomain/41749) [     ] 
o.e.j.s.Server Started @920241ms
   [junit4]   2> 920631 INFO  (Block report processor) [     ] BlockStateChange 
BLOCK* processReport 0x243525125ed3f33b: Processing first storage report for 
DS-79ae12eb-060d-40ea-a3e1-d7eb1ebecfb1 from datanode 
DatanodeRegistration(127.0.0.1:46689, 
datanodeUuid=ff4efc7e-07b8-475d-b11f-e2c79adcd0b8, infoPort=36333, 
infoSecurePort=0, ipcPort=41749, 
storageInfo=lv=-57;cid=testClusterID;nsid=973757410;c=1708946861735)
   [junit4]   2> 920631 INFO  (Block report processor) [     ] BlockStateChange 
BLOCK* processReport 0x243525125ed3f33b: from storage 
DS-79ae12eb-060d-40ea-a3e1-d7eb1ebecfb1 node 
DatanodeRegistration(127.0.0.1:46689, 
datanodeUuid=ff4efc7e-07b8-475d-b11f-e2c79adcd0b8, infoPort=36333, 
infoSecurePort=0, ipcPort=41749, 
storageInfo=lv=-57;cid=testClusterID;nsid=973757410;c=1708946861735), blocks: 
0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
   [junit4]   2> 920631 INFO  (Block report processor) [     ] BlockStateChange 
BLOCK* processReport 0x243525125ed3f33b: Processing first storage report for 
DS-fb4c5b31-69f9-4a95-b338-dade8f8dc847 from datanode 
DatanodeRegistration(127.0.0.1:46689, 
datanodeUuid=ff4efc7e-07b8-475d-b11f-e2c79adcd0b8, infoPort=36333, 
infoSecurePort=0, ipcPort=41749, 
storageInfo=lv=-57;cid=testClusterID;nsid=973757410;c=1708946861735)
   [junit4]   2> 920631 INFO  (Block report processor) [     ] BlockStateChange 
BLOCK* processReport 0x243525125ed3f33b: from storage 
DS-fb4c5b31-69f9-4a95-b338-dade8f8dc847 node 
DatanodeRegistration(127.0.0.1:46689, 
datanodeUuid=ff4efc7e-07b8-475d-b11f-e2c79adcd0b8, infoPort=36333, 
infoSecurePort=0, ipcPort=41749, 
storageInfo=lv=-57;cid=testClusterID;nsid=973757410;c=1708946861735), blocks: 
0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
   [junit4]   2> 920768 INFO  (Block report processor) [     ] BlockStateChange 
BLOCK* processReport 0x54826cde29a1d799: Processing first storage report for 
DS-41024ab7-4b8b-4c3a-a061-0d9e209db8b2 from datanode 
DatanodeRegistration(127.0.0.1:39529, 
datanodeUuid=41ad37dd-25c1-4a65-94c4-29b05d34e975, infoPort=37195, 
infoSecurePort=0, ipcPort=39179, 
storageInfo=lv=-57;cid=testClusterID;nsid=973757410;c=1708946861735)
   [junit4]   2> 920768 INFO  (Block report processor) [     ] BlockStateChange 
BLOCK* processReport 0x54826cde29a1d799: from storage 
DS-41024ab7-4b8b-4c3a-a061-0d9e209db8b2 node 
DatanodeRegistration(127.0.0.1:39529, 
datanodeUuid=41ad37dd-25c1-4a65-94c4-29b05d34e975, infoPort=37195, 
infoSecurePort=0, ipcPort=39179, 
storageInfo=lv=-57;cid=testClusterID;nsid=973757410;c=1708946861735), blocks: 
0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
   [junit4]   2> 920768 INFO  (Block report processor) [     ] BlockStateChange 
BLOCK* processReport 0x54826cde29a1d799: Processing first storage report for 
DS-344db85b-a39d-4993-a9cc-c68d1a22d0ae from datanode 
DatanodeRegistration(127.0.0.1:39529, 
datanodeUuid=41ad37dd-25c1-4a65-94c4-29b05d34e975, infoPort=37195, 
infoSecurePort=0, ipcPort=39179, 
storageInfo=lv=-57;cid=testClusterID;nsid=973757410;c=1708946861735)
   [junit4]   2> 920768 INFO  (Block report processor) [     ] BlockStateChange 
BLOCK* processReport 0x54826cde29a1d799: from storage 
DS-344db85b-a39d-4993-a9cc-c68d1a22d0ae node 
DatanodeRegistration(127.0.0.1:39529, 
datanodeUuid=41ad37dd-25c1-4a65-94c4-29b05d34e975, infoPort=37195, 
infoSecurePort=0, ipcPort=39179, 
storageInfo=lv=-57;cid=testClusterID;nsid=973757410;c=1708946861735), blocks: 
0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
   [junit4]   2> 920872 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.u.ErrorLogMuter Closing ErrorLogMuter-regex-172 after mutting 0 log 
messages
   [junit4]   2> 920873 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.u.ErrorLogMuter Creating ErrorLogMuter-regex-173 for ERROR logs matching 
regex: ignore_exception
   [junit4]   2> 920874 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 920874 INFO  (ZkTestServer Run Thread) [     ] 
o.a.s.c.ZkTestServer client port: 0.0.0.0/0.0.0.0:0
   [junit4]   2> 920874 INFO  (ZkTestServer Run Thread) [     ] 
o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 920875 WARN  (ZkTestServer Run Thread) [     ] 
o.a.z.s.ServerCnxnFactory maxCnxns is not configured, using default value 0.
   [junit4]   2> 920974 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer start zk server on port: 39063
   [junit4]   2> 920974 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer waitForServerUp: 127.0.0.1:39063
   [junit4]   2> 920974 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:39063
   [junit4]   2> 920974 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer connecting to 127.0.0.1 39063
   [junit4]   2> 920978 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 920980 INFO  (zkConnectionManagerCallback-6914-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 920980 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 920982 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 920983 INFO  (zkConnectionManagerCallback-6916-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 920983 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 920984 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 920985 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 920986 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 920987 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 920988 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 920988 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 920989 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 920990 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 920991 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
 to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 920992 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt
 to /configs/conf1/old_synonyms.txt
   [junit4]   2> 920993 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkTestServer put 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/synonyms.txt
 to /configs/conf1/synonyms.txt
   [junit4]   2> 920993 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.AbstractFullDistribZkTestBase Will use NRT replicas unless explicitly 
asked otherwise
   [junit4]   2> 921134 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.s.e.JettySolrRunner Start Jetty (configured port=0, binding port=0)
   [junit4]   2> 921134 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 2 ...
   [junit4]   2> 921134 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] o.e.j.s.Server 
jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
   [junit4]   2> 921135 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 921135 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 921135 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 921135 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@51b99b15{/kv/m,null,AVAILABLE}
   [junit4]   2> 921136 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.e.j.s.AbstractConnector Started ServerConnector@7d40bc73{HTTP/1.1, (http/1.1, 
h2c)}{127.0.0.1:39799}
   [junit4]   2> 921136 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] o.e.j.s.Server 
Started @921167ms
   [junit4]   2> 921136 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost.localdomain:34295/hdfs__localhost.localdomain_34295__home_jenkins_jenkins-slave_workspace_Lucene_Lucene-Solr-Tests-8.11_solr_build_solr-core_test_J3_temp_solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001_tempDir-002_control_data,
 replicaType=NRT, hostContext=/kv/m, hostPort=39799, 
coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/control-001/cores}
   [junit4]   2> 921136 ERROR 
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 921136 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 921136 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.11.4
   [junit4]   2> 921136 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 921136 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr
   [junit4]   2> 921136 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2024-02-26T11:27:43.643Z
   [junit4]   2> 921138 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 921138 INFO  (zkConnectionManagerCallback-6918-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 921138 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 921139 WARN  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]-SendThread(127.0.0.1:39063))
 [     ] o.a.z.ClientCnxn An exception was thrown while closing send thread for 
session 0x100b829a0730002.
   [junit4]   2>           => EndOfStreamException: Unable to read additional 
data from server sessionid 0x100b829a0730002, likely server has closed socket
   [junit4]   2>        at 
org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
   [junit4]   2> org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable 
to read additional data from server sessionid 0x100b829a0730002, likely server 
has closed socket
   [junit4]   2>        at 
org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) 
~[zookeeper-3.6.2.jar:3.6.2]
   [junit4]   2>        at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
 ~[zookeeper-3.6.2.jar:3.6.2]
   [junit4]   2>        at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1275) 
[zookeeper-3.6.2.jar:3.6.2]
   [junit4]   2> 921376 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 921377 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/control-001/solr.xml
   [junit4]   2> 921379 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay 
is ignored
   [junit4]   2> 921379 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.SolrXmlConfig Configuration parameter 
autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 921381 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 922119 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: 
WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
   [junit4]   2> 922120 WARN  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.e.j.u.s.S.config Trusting all certificates configured for 
Client@6f0270fa[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 922120 WARN  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for 
Client@6f0270fa[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 922123 WARN  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.e.j.u.s.S.config Trusting all certificates configured for 
Client@49086a2[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 922123 WARN  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for 
Client@49086a2[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 922126 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:39063/solr
   [junit4]   2> 922127 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 922127 INFO  (zkConnectionManagerCallback-6929-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 922128 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 922487 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.c.ConnectionManager Waiting for client 
to connect to ZooKeeper
   [junit4]   2> 922487 INFO  (zkConnectionManagerCallback-6931-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 922487 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.c.ConnectionManager Client is connected 
to ZooKeeper
   [junit4]   2> 922564 WARN  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.ZkController Contents of zookeeper 
/security.json are world-readable; consider setting up ACLs as described in 
https://solr.apache.org/guide/zookeeper-access-control.html
   [junit4]   2> 922568 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.OverseerElectionContext I am going to 
be the leader 127.0.0.1:39799_kv%2Fm
   [junit4]   2> 922568 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.Overseer Overseer 
(id=72260082962989060-127.0.0.1:39799_kv%2Fm-n_0000000000) starting
   [junit4]   2> 922573 INFO  
(OverseerStateUpdate-72260082962989060-127.0.0.1:39799_kv%2Fm-n_0000000000) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.Overseer Starting to work on the main 
queue : 127.0.0.1:39799_kv%2Fm
   [junit4]   2> 922581 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:39799_kv%2Fm
   [junit4]   2> 922583 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.ZkController non-data nodes now []
   [junit4]   2> 922586 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.p.PackageLoader /packages.json updated to 
version -1
   [junit4]   2> 922586 WARN  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.CoreContainer Not all security plugins 
configured!  authentication=disabled authorization=disabled.  Solr is only as 
secure as you make it. Consider configuring authentication/authorization before 
exposing Solr to users internal or external.  See 
https://s.apache.org/solrsecurity for more info
   [junit4]   2> 922593 INFO  (zkCallback-6930-thread-1) [     ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 922662 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 922684 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e
   [junit4]   2> 922692 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e
   [junit4]   2> 922692 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e
   [junit4]   2> 922693 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/control-001/cores
   [junit4]   2> 922713 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 922725 INFO  (zkConnectionManagerCallback-6948-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 922725 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 922729 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 922732 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39063/solr ready
   [junit4]   2> 922733 INFO  (qtp917590667-12775) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:39799/kv/m/admin/collections?action=CREATE&name=control_collection&collection.configName=conf1&createNodeSet=127.0.0.1%3A39799_kv%252Fm&numShards=1&nrtReplicas=1&wt=javabin&version=2)
   [junit4]   2> 922736 INFO  
(OverseerThreadFactory-6938-thread-1-processing-n:127.0.0.1:39799_kv%2Fm) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.a.c.CreateCollectionCmd Create 
collection control_collection
   [junit4]   2> 922845 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:39799/kv/m/admin/cores?null)
   [junit4]   2> 922845 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm    
x:control_collection_shard1_replica_n1 ] o.a.s.h.a.CoreAdminOperation core 
create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 922846 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm    
x:control_collection_shard1_replica_n1 ] o.a.s.c.TransientSolrCoreCacheDefault 
Allocating transient core cache for max 4 cores with initial capacity of 4
   [junit4]   2> 923859 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.11.4
   [junit4]   2> 923859 WARN  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.SolrConfig solrconfig.xml: <jmx> is no longer supported, use 
solr.xml:/metrics/reporter section instead
   [junit4]   2> 923868 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.s.IndexSchema Schema name=test
   [junit4]   2> 923878 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 923897 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.CoreContainer Creating SolrCore 'control_collection_shard1_replica_n1' 
using configuration from configset conf1, trusted=true
   [junit4]   2> 923898 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.control_collection.shard1.replica_n1' (registry 
'solr.core.control_collection.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e
   [junit4]   2> 923898 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost.localdomain:34295/solr_hdfs_home
   [junit4]   2> 923898 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 923898 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.SolrCore [[control_collection_shard1_replica_n1] ] Opening new SolrCore 
at 
[/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/control-001/cores/control_collection_shard1_replica_n1],
 
dataDir=[hdfs://localhost.localdomain:34295/solr_hdfs_home/control_collection/core_node2/data/]
   [junit4]   2> 923899 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost.localdomain:34295/solr_hdfs_home/control_collection/core_node2/data/snapshot_metadata
   [junit4]   2> 923920 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 923920 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[33554432] will allocate [1] slabs and use ~[33554432] bytes
   [junit4]   2> 923920 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Creating new global HDFS BlockCache
   [junit4]   2> 923958 WARN  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.s.h.HdfsDirectory HDFS support in Solr has been deprecated as of 8.6. See 
SOLR-14021 for details.
   [junit4]   2> 923958 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 923959 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost.localdomain:34295/solr_hdfs_home/control_collection/core_node2/data
   [junit4]   2> 923969 WARN  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.s.h.HdfsDirectory HDFS support in Solr has been deprecated as of 8.6. See 
SOLR-14021 for details.
   [junit4]   2> 923983 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost.localdomain:34295/solr_hdfs_home/control_collection/core_node2/data/index
   [junit4]   2> 923991 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 923991 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[33554432] will allocate [1] slabs and use ~[33554432] bytes
   [junit4]   2> 923993 WARN  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.s.h.HdfsDirectory HDFS support in Solr has been deprecated as of 8.6. See 
SOLR-14021 for details.
   [junit4]   2> 923993 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 923993 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=40, maxMergeAtOnceExplicit=37, maxMergedSegmentMB=93.83984375, 
floorSegmentMB=1.7158203125, forceMergeDeletesPctAllowed=20.41120810261553, 
segmentsPerTier=21.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0, 
deletesPctAllowed=45.020722263402206
   [junit4]   2> 924033 WARN  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 924094 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 924094 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 924094 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 924113 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 924113 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 924114 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=15, maxMergeAtOnceExplicit=22, maxMergedSegmentMB=22.6298828125, 
floorSegmentMB=1.333984375, forceMergeDeletesPctAllowed=18.29093826449552, 
segmentsPerTier=41.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.40713842322368576, deletesPctAllowed=21.350665628070377
   [junit4]   2> 924144 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 924145 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 924145 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000 ms
   [junit4]   2> 924146 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1791960669647536128
   [junit4]   2> 924150 INFO  
(searcherExecutor-6950-thread-1-processing-n:127.0.0.1:39799_kv%2Fm 
x:control_collection_shard1_replica_n1 c:control_collection s:shard1) 
[n:127.0.0.1:39799_kv%2Fm c:control_collection s:shard1  
x:control_collection_shard1_replica_n1 ] o.a.s.c.SolrCore 
[control_collection_shard1_replica_n1]  Registered new searcher autowarm time: 
0 ms
   [junit4]   2> 924153 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/control_collection/terms/shard1 to Terms{values={core_node2=0}, 
version=0}
   [junit4]   2> 924153 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
/collections/control_collection/leaders/shard1
   [junit4]   2> 924156 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 924156 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 924156 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.SyncStrategy Sync replicas to 
http://127.0.0.1:39799/kv/m/control_collection_shard1_replica_n1/
   [junit4]   2> 924156 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 924157 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.SyncStrategy 
http://127.0.0.1:39799/kv/m/control_collection_shard1_replica_n1/ has no 
replicas
   [junit4]   2> 924157 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node 
/collections/control_collection/leaders/shard1/leader after winning as 
/collections/control_collection/leader_elect/shard1/election/72260082962989060-core_node2-n_0000000000
   [junit4]   2> 924158 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:39799/kv/m/control_collection_shard1_replica_n1/ shard1
   [junit4]   2> 924580 INFO  (zkCallback-6930-thread-1) [     ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 924581 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1 ] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 924583 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=1738
   [junit4]   2> 924587 INFO  (qtp917590667-12775) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at 
most 45 seconds. Check all shard replicas
   [junit4]   2> 924683 INFO  (zkCallback-6930-thread-1) [     ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 924683 INFO  (zkCallback-6930-thread-2) [     ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 924684 INFO  (qtp917590667-12775) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:39799_kv%252Fm&wt=javabin&version=2}
 status=0 QTime=1951
   [junit4]   2> 924684 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.AbstractFullDistribZkTestBase Waiting to see 1 active replicas in 
collection: control_collection
   [junit4]   2> 924739 INFO  
(OverseerCollectionConfigSetProcessor-72260082962989060-127.0.0.1:39799_kv%2Fm-n_0000000000)
 [n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000000 doesn't exist. Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 924810 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 924810 INFO  (zkConnectionManagerCallback-6959-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 924810 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 924813 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 924815 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39063/solr ready
   [junit4]   2> 924815 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.ChaosMonkey monkey: init - expire sessions:false cause connection 
loss:false
   [junit4]   2> 924817 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:39799/kv/m/admin/collections?action=CREATE&name=collection1&collection.configName=conf1&createNodeSet=&numShards=1&nrtReplicas=1&stateFormat=2&wt=javabin&version=2)
   [junit4]   2> 924822 INFO  
(OverseerThreadFactory-6938-thread-2-processing-n:127.0.0.1:39799_kv%2Fm) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.a.c.CreateCollectionCmd Create 
collection collection1
   [junit4]   2> 925028 WARN  
(OverseerThreadFactory-6938-thread-2-processing-n:127.0.0.1:39799_kv%2Fm) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.a.c.CreateCollectionCmd It is unusual 
to create a collection (collection1) without cores.
   [junit4]   2> 925030 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at 
most 45 seconds. Check all shard replicas
   [junit4]   2> 925031 INFO  (qtp917590667-12777) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=2&wt=javabin&version=2}
 status=0 QTime=213
   [junit4]   2> 925031 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.SolrCloudTestCase active slice count: 1 expected: 1
   [junit4]   2> 925031 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.SolrCloudTestCase active replica count: 0 expected replica count: 0
   [junit4]   2> 925031 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.AbstractFullDistribZkTestBase Creating jetty instances 
pullReplicaCount=0 numOtherReplicas=1
   [junit4]   2> 925243 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/shard-1-001
 of type NRT for shard1
   [junit4]   2> 925250 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.s.e.JettySolrRunner Start Jetty (configured port=0, binding port=0)
   [junit4]   2> 925250 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 2 ...
   [junit4]   2> 925250 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.s.Server jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
   [junit4]   2> 925252 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 925252 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 925253 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 925254 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@63608e59{/kv/m,null,AVAILABLE}
   [junit4]   2> 925254 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.s.AbstractConnector Started ServerConnector@7629ba29{HTTP/1.1, (http/1.1, 
h2c)}{127.0.0.1:40011}
   [junit4]   2> 925257 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.s.Server Started @925288ms
   [junit4]   2> 925257 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost.localdomain:34295/hdfs__localhost.localdomain_34295__home_jenkins_jenkins-slave_workspace_Lucene_Lucene-Solr-Tests-8.11_solr_build_solr-core_test_J3_temp_solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001_tempDir-002_jetty1,
 replicaType=NRT, solrconfig=solrconfig.xml, hostContext=/kv/m, hostPort=40011, 
coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/shard-1-001/cores}
   [junit4]   2> 925257 ERROR (closeThreadPool-6960-thread-1) [     ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 925257 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 925257 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.11.4
   [junit4]   2> 925257 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 925257 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr
   [junit4]   2> 925257 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2024-02-26T11:27:47.764Z
   [junit4]   2> 925259 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 925260 INFO  (zkConnectionManagerCallback-6962-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 925260 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 925261 WARN  
(closeThreadPool-6960-thread-1-SendThread(127.0.0.1:39063)) [     ] 
o.a.z.ClientCnxn An exception was thrown while closing send thread for session 
0x100b829a0730007.
   [junit4]   2>           => EndOfStreamException: Unable to read additional 
data from server sessionid 0x100b829a0730007, likely server has closed socket
   [junit4]   2>        at 
org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
   [junit4]   2> org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable 
to read additional data from server sessionid 0x100b829a0730007, likely server 
has closed socket
   [junit4]   2>        at 
org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) 
~[zookeeper-3.6.2.jar:3.6.2]
   [junit4]   2>        at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
 ~[zookeeper-3.6.2.jar:3.6.2]
   [junit4]   2>        at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1275) 
[zookeeper-3.6.2.jar:3.6.2]
   [junit4]   2> 925361 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 925361 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/shard-1-001/solr.xml
   [junit4]   2> 925364 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay 
is ignored
   [junit4]   2> 925364 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.SolrXmlConfig Configuration parameter 
autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 925366 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 926529 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: 
WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
   [junit4]   2> 926530 WARN  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.u.s.S.config Trusting all certificates configured for 
Client@60aeb4f6[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 926530 WARN  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for 
Client@60aeb4f6[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 926573 WARN  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.u.s.S.config Trusting all certificates configured for 
Client@595d7f0c[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 926573 WARN  (closeThreadPool-6960-thread-1) [     ] 
o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for 
Client@595d7f0c[provider=null,keyStore=null,trustStore=null]
   [junit4]   2> 926574 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:39063/solr
   [junit4]   2> 926585 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 927180 INFO  
(OverseerCollectionConfigSetProcessor-72260082962989060-127.0.0.1:39799_kv%2Fm-n_0000000000)
 [n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000002 doesn't exist. Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 927192 INFO  (zkConnectionManagerCallback-6973-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 927192 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 927324 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.c.ConnectionManager Waiting for client 
to connect to ZooKeeper
   [junit4]   2> 927332 INFO  (zkConnectionManagerCallback-6975-thread-1) [     
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 927332 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.c.ConnectionManager Client is connected 
to ZooKeeper
   [junit4]   2> 927338 WARN  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.ZkController Contents of zookeeper 
/security.json are world-readable; consider setting up ACLs as described in 
https://solr.apache.org/guide/zookeeper-access-control.html
   [junit4]   2> 927339 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 927341 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.ZkController Publish 
node=127.0.0.1:40011_kv%2Fm as DOWN
   [junit4]   2> 927341 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.TransientSolrCoreCacheDefault 
Allocating transient core cache for max 4 cores with initial capacity of 4
   [junit4]   2> 927341 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:40011_kv%2Fm
   [junit4]   2> 927342 INFO  (zkCallback-6930-thread-2) [     ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 927343 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.ZkController non-data nodes now []
   [junit4]   2> 927345 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.p.PackageLoader /packages.json updated to 
version -1
   [junit4]   2> 927345 WARN  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.CoreContainer Not all security plugins 
configured!  authentication=disabled authorization=disabled.  Solr is only as 
secure as you make it. Consider configuring authentication/authorization before 
exposing Solr to users internal or external.  See 
https://s.apache.org/solrsecurity for more info
   [junit4]   2> 927373 INFO  (zkCallback-6958-thread-1) [     ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 927373 INFO  (zkCallback-6974-thread-1) [     ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 927408 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 927440 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e
   [junit4]   2> 927463 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e
   [junit4]   2> 927463 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e
   [junit4]   2> 927464 INFO  (closeThreadPool-6960-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/shard-1-001/cores
   [junit4]   2> 927509 INFO  (closeThreadPool-6960-thread-1) [     ] 
o.a.s.c.AbstractFullDistribZkTestBase waitForLiveNode: 127.0.0.1:40011_kv%2Fm
   [junit4]   2> 927537 INFO  (qtp917590667-12775) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:39799/kv/m/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=127.0.0.1%3A40011_kv%252Fm&type=NRT&wt=javabin&version=2)
   [junit4]   2> 927561 INFO  
(OverseerThreadFactory-6938-thread-3-processing-n:127.0.0.1:39799_kv%2Fm) 
[n:127.0.0.1:39799_kv%2Fm c:collection1 s:shard1   ] o.a.s.c.a.c.AddReplicaCmd 
Node Identified 127.0.0.1:40011_kv%2Fm for creating new replica of shard shard1 
for collection collection1
   [junit4]   2> 927562 INFO  
(OverseerThreadFactory-6938-thread-3-processing-n:127.0.0.1:39799_kv%2Fm) 
[n:127.0.0.1:39799_kv%2Fm c:collection1 s:shard1   ] o.a.s.c.a.c.AddReplicaCmd 
Returning CreateReplica command.
   [junit4]   2> 927612 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm   
  ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:40011/kv/m/admin/cores?null)
   [junit4]   2> 927613 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm   
 x:collection1_shard1_replica_n1 ] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 928644 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.11.4
   [junit4]   2> 928644 WARN  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.c.SolrConfig 
solrconfig.xml: <jmx> is no longer supported, use solr.xml:/metrics/reporter 
section instead
   [junit4]   2> 928649 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.s.IndexSchema 
Schema name=test
   [junit4]   2> 928658 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 928673 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard1_replica_n1' using configuration from 
configset conf1, trusted=true
   [junit4]   2> 928673 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard1.replica_n1' (registry 
'solr.core.collection1.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@68c8815e
   [junit4]   2> 928673 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost.localdomain:34295/solr_hdfs_home
   [junit4]   2> 928673 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 928673 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.c.SolrCore 
[[collection1_shard1_replica_n1] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J3/temp/solr.index.hdfs.CheckHdfsIndexTest_5CCE405D7E0D62F3-001/shard-1-001/cores/collection1_shard1_replica_n1],
 
dataDir=[hdfs://localhost.localdomain:34295/solr_hdfs_home/collection1/core_node2/data/]
   [junit4]   2> 928674 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost.localdomain:34295/solr_hdfs_home/collection1/core_node2/data/snapshot_metadata
   [junit4]   2> 928682 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 928682 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[33554432] will allocate [1] slabs and use ~[33554432] bytes
   [junit4]   2> 928685 WARN  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.s.h.HdfsDirectory HDFS support in Solr has been deprecated as of 8.6. See 
SOLR-14021 for details.
   [junit4]   2> 928685 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 928685 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost.localdomain:34295/solr_hdfs_home/collection1/core_node2/data
   [junit4]   2> 928693 WARN  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.s.h.HdfsDirectory HDFS support in Solr has been deprecated as of 8.6. See 
SOLR-14021 for details.
   [junit4]   2> 928700 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost.localdomain:34295/solr_hdfs_home/collection1/core_node2/data/index
   [junit4]   2> 928706 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 928706 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[33554432] will allocate [1] slabs and use ~[33554432] bytes
   [junit4]   2> 928708 WARN  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.s.h.HdfsDirectory HDFS support in Solr has been deprecated as of 8.6. See 
SOLR-14021 for details.
   [junit4]   2> 928708 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 928708 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=40, maxMergeAtOnceExplicit=37, maxMergedSegmentMB=93.83984375, 
floorSegmentMB=1.7158203125, forceMergeDeletesPctAllowed=20.41120810261553, 
segmentsPerTier=21.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0, 
deletesPctAllowed=45.020722263402206
   [junit4]   2> 928727 WARN  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 928772 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 928772 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 928772 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 928783 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 928783 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 928784 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=15, maxMergeAtOnceExplicit=22, maxMergedSegmentMB=22.6298828125, 
floorSegmentMB=1.333984375, forceMergeDeletesPctAllowed=18.29093826449552, 
segmentsPerTier=41.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.40713842322368576, deletesPctAllowed=21.350665628070377
   [junit4]   2> 928791 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 928792 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 928792 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000 ms
   [junit4]   2> 928793 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1791960674520268800
   [junit4]   2> 928798 INFO  
(searcherExecutor-6986-thread-1-processing-n:127.0.0.1:40011_kv%2Fm 
x:collection1_shard1_replica_n1 c:collection1 s:shard1) 
[n:127.0.0.1:40011_kv%2Fm c:collection1 s:shard1  
x:collection1_shard1_replica_n1 ] o.a.s.c.SolrCore 
[collection1_shard1_replica_n1]  Registered new searcher autowarm time: 0 ms
   [junit4]   2> 928800 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.c.ZkShardTerms 
Successful update of terms at /collections/collection1/terms/shard1 to 
Terms{values={core_node2=0}, version=0}
   [junit4]   2> 928800 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
/collections/collection1/leaders/shard1
   [junit4]   2> 928804 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 928804 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 928804 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.c.SyncStrategy 
Sync replicas to http://127.0.0.1:40011/kv/m/collection1_shard1_replica_n1/
   [junit4]   2> 928804 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.c.SyncStrategy 
Sync Success - now sync replicas to me
   [junit4]   2> 928804 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.c.SyncStrategy 
http://127.0.0.1:40011/kv/m/collection1_shard1_replica_n1/ has no replicas
   [junit4]   2> 928804 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node 
/collections/collection1/leaders/shard1/leader after winning as 
/collections/collection1/leader_elect/shard1/election/72260082962989065-core_node2-n_0000000000
   [junit4]   2> 928807 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:40011/kv/m/collection1_shard1_replica_n1/ shard1
   [junit4]   2> 928909 INFO  (zkCallback-6974-thread-1) [     ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 928910 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1  x:collection1_shard1_replica_n1 ] o.a.s.c.ZkController 
I am the leader, no recovery necessary
   [junit4]   2> 928914 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm   
  ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=1302
   [junit4]   2> 928917 INFO  (qtp917590667-12775) [n:127.0.0.1:39799_kv%2Fm 
c:collection1    ] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/collections 
params={node=127.0.0.1:40011_kv%252Fm&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2}
 status=0 QTime=1380
   [junit4]   2> 928917 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.AbstractFullDistribZkTestBase Waiting to see 1 active replicas in 
collection: collection1
   [junit4]   2> 929015 INFO  (zkCallback-6958-thread-1) [     ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 929015 INFO  (zkCallback-6974-thread-2) [     ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 929015 INFO  (zkCallback-6974-thread-1) [     ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 929017 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.SolrTestCaseJ4 ###Starting doTest
   [junit4]   2> 929019 WARN  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.h.HdfsDirectory HDFS support in Solr has been deprecated as of 8.6. See 
SOLR-14021 for details.
   [junit4]   2> 929019 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.AbstractDistribZkTestBase Wait for recoveries to finish - collection: 
collection1 failOnTimeout: true timeout (sec):
   [junit4]   2> 929019 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.AbstractDistribZkTestBase Recoveries finished - collection: collection1
   [junit4]   2> 929020 INFO  (qtp917590667-12774) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:39799/kv/m/control_collection/update?CONTROL=TRUE&wt=javabin&version=2)
   [junit4]   2> 929042 INFO  (qtp917590667-12774) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1 r:core_node2 
x:control_collection_shard1_replica_n1 ] o.a.s.c.ZkShardTerms Successful update 
of terms at /collections/control_collection/terms/shard1 to 
Terms{values={core_node2=1}, version=1}
   [junit4]   2> 929049 INFO  (qtp917590667-12774) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1 r:core_node2 
x:control_collection_shard1_replica_n1 ] o.a.s.u.p.LogUpdateProcessorFactory 
[control_collection_shard1_replica_n1]  webapp=/kv/m path=/update 
params={wt=javabin&version=2&CONTROL=TRUE}{add=[1 (1791960674759344128)]} 0 29
   [junit4]   2> 929051 INFO  (qtp1914056883-12839) [n:127.0.0.1:40011_kv%2Fm   
  ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:40011/kv/m/collection1_shard1_replica_n1/update?wt=javabin&version=2)
   [junit4]   2> 929063 INFO  (qtp1914056883-12839) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/collection1/terms/shard1 to Terms{values={core_node2=1}, version=1}
   [junit4]   2> 929075 INFO  (qtp1914056883-12839) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1]  
webapp=/kv/m path=/update params={wt=javabin&version=2}{add=[1 
(1791960674791849984)]} 0 23
   [junit4]   2> 929076 INFO  (qtp917590667-12776) [n:127.0.0.1:39799_kv%2Fm    
 ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:39799/kv/m/control_collection/update?null)
   [junit4]   2> 929259 INFO  
(searcherExecutor-6950-thread-1-processing-n:127.0.0.1:39799_kv%2Fm 
x:control_collection_shard1_replica_n1 c:control_collection s:shard1 
r:core_node2) [n:127.0.0.1:39799_kv%2Fm c:control_collection s:shard1 
r:core_node2 x:control_collection_shard1_replica_n1 ] o.a.s.c.SolrCore 
[control_collection_shard1_replica_n1]  Registered new searcher autowarm time: 
0 ms
   [junit4]   2> 929260 INFO  (qtp917590667-12776) [n:127.0.0.1:39799_kv%2Fm 
c:control_collection s:shard1 r:core_node2 
x:control_collection_shard1_replica_n1 ] o.a.s.u.p.LogUpdateProcessorFactory 
[control_collection_shard1_replica_n1]  webapp=/kv/m path=/update 
params={waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=}
 0 184
   [junit4]   2> 929261 INFO  (qtp1914056883-12835) [n:127.0.0.1:40011_kv%2Fm   
  ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:40011/kv/m/collection1/update?null)
   [junit4]   2> 929561 INFO  
(OverseerCollectionConfigSetProcessor-72260082962989060-127.0.0.1:39799_kv%2Fm-n_0000000000)
 [n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000004 doesn't exist. Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 929813 INFO  
(searcherExecutor-6986-thread-1-processing-n:127.0.0.1:40011_kv%2Fm 
x:collection1_shard1_replica_n1 c:collection1 s:shard1 r:core_node2) 
[n:127.0.0.1:40011_kv%2Fm c:collection1 s:shard1 r:core_node2 
x:collection1_shard1_replica_n1 ] o.a.s.c.SolrCore 
[collection1_shard1_replica_n1]  Registered new searcher autowarm time: 0 ms
   [junit4]   2> 930220 INFO  (qtp1914056883-12835) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1]  
webapp=/kv/m path=/update 
params={_stateVer_=collection1:3&waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=}
 0 959
   [junit4]   2> 930221 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.AbstractDistribZkTestBase Wait for recoveries to finish - collection: 
collection1 failOnTimeout: true timeout (sec):
   [junit4]   2> 930221 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.c.AbstractDistribZkTestBase Recoveries finished - collection: collection1
   [junit4]   2> 930222 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm   
  ] o.a.s.s.HttpSolrCall 
HttpSolrCall.init(http://127.0.0.1:40011/kv/m/collection1/admin/system?qt=%2Fadmin%2Fsystem&wt=javabin&version=2)
   [junit4]   2> 930234 INFO  (qtp1914056883-12837) [n:127.0.0.1:40011_kv%2Fm 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp=/kv/m 
path=/admin/system params={qt=/admin/system&wt=javabin&version=2} status=0 
QTime=11
   [junit4]   1> 
   [junit4]   1> Opening index @ 
hdfs://localhost.localdomain:34295/solr_hdfs_home/collection1/core_node2/data/index
   [junit4]   1> 
   [junit4]   2> 930255 WARN  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.h.HdfsDirectory HDFS support in Solr has been deprecated as of 8.6. See 
SOLR-14021 for details.
   [junit4]   1> Checking index with threadCount: 4
   [junit4]   1> 0.00% total deletions; 1 documents; 0 deletions
   [junit4]   1> Segments file=segments_2 numSegments=1 version=8.11.4 
id=d3jia8z4kvyijn02pst6rkr9l userData={commitCommandVer=1791960675012050944, 
commitTimeMSec=1708946871769}
   [junit4]   1> 1 of 1: name=_0 maxDoc=1
   [junit4]   1>     version=8.11.4
   [junit4]   1>     id=d3jia8z4kvyijn02pst6rkr9f
   [junit4]   1>     codec=Asserting(Lucene87)
   [junit4]   1>     compound=false
   [junit4]   1>     numFiles=14
   [junit4]   1>     size (MB)=0.006
   [junit4]   1>     diagnostics = {java.vendor=Temurin, os=Linux, 
java.version=1.8.0_362, java.vm.version=25.362-b09, lucene.version=8.11.4, 
os.arch=amd64, java.runtime.version=1.8.0_362-b09, source=flush, 
os.version=4.15.0-213-generic, timestamp=1708946871809}
   [junit4]   1>     no deletions
   [junit4]   1>     test: open reader.........OK [took 0.015 sec]
   [junit4]   1>     test: check integrity.....OK [took 0.004 sec]
   [junit4]   1>     test: check live docs.....OK [took 0.001 sec]
   [junit4]   1>     test: field infos.........OK [21 fields] [took 0.001 sec]
   [junit4]   1>     test: field norms.........OK [7 fields] [took 0.001 sec]
   [junit4]   1>     test: terms, freq, prox...OK [49 terms; 49 terms/docs 
pairs; 49 tokens] [took 0.020 sec]
   [junit4]   1>     test: stored fields.......OK [18 total field count; avg 
18.0 fields per doc] [took 0.002 sec]
   [junit4]   1>     test: term vectors........OK [0 total term vector count; 
avg 0.0 term/freq vector fields per doc] [took 0.001 sec]
   [junit4]   1>     test: docvalues...........OK [5 docvalues fields; 0 
BINARY; 3 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 2 SORTED_SET] [took 0.005 sec]
   [junit4]   1>     test: points..............OK [0 fields, 0 points] [took 
0.001 sec]
   [junit4]   1> 
   [junit4]   1> No problems were detected with this index.
   [junit4]   1> 
   [junit4]   1> Took 0.065 sec total.
   [junit4]   1> 
   [junit4]   2> 930325 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.h.HdfsDirectory Closing hdfs directory 
hdfs://localhost.localdomain:34295/solr_hdfs_home/collection1/core_node2/data/index
   [junit4]   2> 930326 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.s.h.HdfsDirectory Closing hdfs directory 
hdfs://localhost.localdomain:34295/solr
   [junit4]   2> 930326 INFO  
(TEST-CheckHdfsIndexTest.doTest-seed#[5CCE405D7E0D62F3]) [     ] 
o.a.s.SolrTestCaseJ4 ###Ending doTest
   [junit4]   2> 930431 INFO  (closeThreadPool-6993-thread-2) [     ] 
o.a.s.c.CoreContainer Shutting down CoreContainer instance=102657752
   [junit4]   2> 930431 INFO  (closeThreadPool-6993-thread-2) [     ] 
o.a.s.c.ZkController Remove node as live in 
ZooKeeper:/live_nodes/127.0.0.1:39799_kv%2Fm
   [junit4]   2> 930432 INFO  (closeThreadPool-6993-thread-2) [     ] 
o.a.s.c.ZkController Publish this node as DOWN...
   [junit4]   2> 930432 INFO  (closeThreadPool-6993-thread-2) [     ] 
o.a.s.c.ZkController Publish node=127.0.0.1:39799_kv%2Fm as DOWN
   [junit4]   2> 930436 INFO  (coreCloseExecutor-6998-thread-1) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.c.SolrCore 
[control_collection_shard1_replica_n1]  CLOSING SolrCore 
org.apache.solr.core.SolrCore@312269b8
   [junit4]   2> 930437 INFO  (coreCloseExecutor-6998-thread-1) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.core.control_collection.shard1.replica_n1 
tag=SolrCore@312269b8
   [junit4]   2> 930437 INFO  (coreCloseExecutor-6998-thread-1) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.m.r.SolrJmxReporter Closing reporter 
[org.apache.solr.metrics.reporters.SolrJmxReporter@459e4236: rootName = null, 
domain = solr.core.control_collection.shard1.replica_n1, service url = null, 
agent id = null] for registry 
solr.core.control_collection.shard1.replica_n1/com.codahale.metrics.MetricRegistry@344e0d11
   [junit4]   2> 930437 INFO  (closeThreadPool-6993-thread-1) [     ] 
o.a.s.c.CoreContainer Shutting down CoreContainer instance=1852645171
   [junit4]   2> 930437 INFO  (closeThreadPool-6993-thread-1) [     ] 
o.a.s.c.ZkController Remove node as live in 
ZooKeeper:/live_nodes/127.0.0.1:40011_kv%2Fm
   [junit4]   2> 930437 INFO  (closeThreadPool-6993-thread-1) [     ] 
o.a.s.c.ZkController Publish this node as DOWN...
   [junit4]   2> 930437 INFO  (closeThreadPool-6993-thread-1) [     ] 
o.a.s.c.ZkController Publish node=127.0.0.1:40011_kv%2Fm as DOWN
   [junit4]   2> 930444 INFO  (coreCloseExecutor-7001-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.c.SolrCore 
[collection1_shard1_replica_n1]  CLOSING SolrCore 
org.apache.solr.core.SolrCore@4dacc805
   [junit4]   2> 930444 INFO  (coreCloseExecutor-7001-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.core.collection1.shard1.replica_n1 
tag=SolrCore@4dacc805
   [junit4]   2> 930444 INFO  (coreCloseExecutor-7001-thread-1) 
[n:127.0.0.1:40011_kv%2Fm     ] o.a.s.m.r.SolrJmxReporter Closing reporter 
[org.apache.solr.metrics.reporters.SolrJmxReporter@396c163d: rootName = null, 
domain = solr.core.collection1.shard1.replica_n1, service url = null, agent id 
= null] for registry 
solr.core.collection1.shard1.replica_n1/com.codahale.metrics.MetricRegistry@1f62609b
   [junit4]   2> 930458 INFO  (coreCloseExecutor-6998-thread-1) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.collection.control_collection.shard1.leader 
tag=SolrCore@312269b8
   [junit4]   2> 930460 INFO  (coreCloseExecutor-6998-thread-1) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.u.DirectUpdateHandler2 Committing on 
IndexWriter.close()  ... SKIPPED (unnecessary).
   [junit4]   2> 930464 INFO  (coreCloseExecutor-6998-thread-1) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.s.h.HdfsDirectory Closing hdfs directory 
hdfs://localhost.localdomain:34295/solr_hdfs_home/control_collection/core_node2/data
   [junit4]   2> 930464 INFO  (coreCloseExecutor-6998-thread-1) 
[n:127.0.0.1:39799_kv%2Fm     ] o.a.s.s.h.HdfsDirectory Closing hdfs directory 
hdfs://localhost.localdomain:34295/solr_hdfs_home/control_collection/core_node2/data/snapshot_metadata
   [junit4]   2> 930468 INFO  (coreCloseExecutor-7001-thread-1) [n:1

[...truncated too long message...]

.AbstractConnector Stopped ServerConnector@2c0f6bac{HTTP/1.1, 
(http/1.1)}{localhost:0}
   [junit4]   2> 972133 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 972133 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@2fbaeb79{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,STOPPED}
   [junit4]   2> 972139 WARN  (BP-1874801484-127.0.0.1-1708946861735 
heartbeating to localhost.localdomain/127.0.0.1:34295) [     ] 
o.a.h.h.s.d.IncrementalBlockReportManager IncrementalBlockReportManager 
interrupted
   [junit4]   2> 972139 WARN  (BP-1874801484-127.0.0.1-1708946861735 
heartbeating to localhost.localdomain/127.0.0.1:34295) [     ] 
o.a.h.h.s.d.DataNode Ending block pool service for: Block pool 
BP-1874801484-127.0.0.1-1708946861735 (Datanode Uuid 
41ad37dd-25c1-4a65-94c4-29b05d34e975) service to 
localhost.localdomain/127.0.0.1:34295
   [junit4]   2> 972140 ERROR (Command processor) [     ] o.a.h.h.s.d.DataNode 
Command processor encountered interrupt and exit.
   [junit4]   2> 972140 WARN  (Command processor) [     ] o.a.h.h.s.d.DataNode 
Ending command processor service for: Thread[Command 
processor,5,TGRP-CheckHdfsIndexTest]
   [junit4]   2> 972142 WARN  (Listener at localhost.localdomain/39179) [     ] 
o.a.h.h.s.d.DirectoryScanner DirectoryScanner: shutdown has been called
   [junit4]   2> 972178 WARN  (BP-1874801484-127.0.0.1-1708946861735 
heartbeating to localhost.localdomain/127.0.0.1:34295) [     ] 
o.a.h.h.s.d.DataNode Ending block pool service for: Block pool 
BP-1874801484-127.0.0.1-1708946861735 (Datanode Uuid 
ff4efc7e-07b8-475d-b11f-e2c79adcd0b8) service to 
localhost.localdomain/127.0.0.1:34295
   [junit4]   2> 972189 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.w.WebAppContext@4d8d7f0e{datanode,/,null,STOPPED}{jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/datanode}
   [junit4]   2> 972192 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.AbstractConnector Stopped ServerConnector@6e20a1da{HTTP/1.1, 
(http/1.1)}{localhost:0}
   [junit4]   2> 972192 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 972193 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@40ce2f2d{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,STOPPED}
   [junit4]   2> 972204 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.w.WebAppContext@5e789c8e{hdfs,/,null,STOPPED}{jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/hdfs}
   [junit4]   2> 972205 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.AbstractConnector Stopped ServerConnector@64ec0361{HTTP/1.1, 
(http/1.1)}{localhost.localdomain:0}
   [junit4]   2> 972205 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.session node0 Stopped scavenging
   [junit4]   2> 972205 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@5baafc69{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,STOPPED}
   [junit4]   2> 972250 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.a.s.u.ErrorLogMuter Closing ErrorLogMuter-regex-180 after mutting 0 log 
messages
   [junit4]   2> 972250 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.a.s.u.ErrorLogMuter Creating ErrorLogMuter-regex-181 for ERROR logs matching 
regex: ignore_exception
   [junit4]   2> 972251 INFO  (Listener at localhost.localdomain/39179) [     ] 
o.a.s.SolrTestCaseJ4 ------------------------------------------------------- 
Done waiting for tracked resources to be released
   [junit4]   2> Feb 26, 2024 11:28:35 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 34 leaked 
thread(s).
   [junit4]   2> Feb 26, 2024 11:28:45 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> SEVERE: 1 thread leaked from SUITE scope at 
org.apache.solr.index.hdfs.CheckHdfsIndexTest: 
   [junit4]   2>    1) Thread[id=12676, name=Command processor, state=WAITING, 
group=TGRP-CheckHdfsIndexTest]
   [junit4]   2>         at sun.misc.Unsafe.park(Native Method)
   [junit4]   2>         at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]   2>         at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]   2>         at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]   2>         at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
   [junit4]   2>         at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)
   [junit4]   2> Feb 26, 2024 11:28:45 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
   [junit4]   2> INFO: Starting to interrupt leaked threads:
   [junit4]   2>    1) Thread[id=12676, name=Command processor, state=WAITING, 
group=TGRP-CheckHdfsIndexTest]
   [junit4]   2> 983082 ERROR (Command processor) [     ] o.a.h.h.s.d.DataNode 
Command processor encountered interrupt and exit.
   [junit4]   2> 983082 WARN  (Command processor) [     ] o.a.h.h.s.d.DataNode 
Ending command processor service for: Thread[Command 
processor,5,TGRP-CheckHdfsIndexTest]
   [junit4]   2> Feb 26, 2024 11:28:45 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
   [junit4]   2> INFO: All leaked threads terminated.
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene87), 
sim=Asserting(RandomSimilarity(queryNorm=false): {field=IB SPL-DZ(0.3), 
titleTokenized=DFR I(n)LZ(0.3), body=LM Jelinek-Mercer(0.700000)}), 
locale=es-DO, timezone=America/Creston
   [junit4]   2> NOTE: Linux 4.15.0-213-generic amd64/Temurin 1.8.0_362 
(64-bit)/cpus=4,threads=2,free=186599376,total=502267904
   [junit4]   2> NOTE: All tests run in this JVM: 
[AuthWithShardHandlerFactoryOverrideTest, TestCloudRecovery2, 
BackupRestoreApiErrorConditionsTest, FullHLLTest, TestNoOpRegenerator, 
TestStandardQParsers, PeerSyncWithBufferUpdatesTest, TestSearchPerf, 
HdfsCloudIncrementalBackupTest, TestStressUserVersions, 
TestUseDocValuesAsStored, ShardBackupIdTest, 
PhrasesIdentificationComponentTest, HdfsTlogReplayBufferedWhileIndexingTest, 
TestDelegationWithHadoopAuth, PolyFieldTest, TestAuthenticationFramework, 
TestIBSimilarityFactory, TestLeaderElectionZkExpiry, QueryResultKeyTest, 
ImplicitSnitchTest, TestStreamBody, NoCacheHeaderTest, TestSolrConfigHandler, 
V2CollectionBackupsAPIMappingTest, ZkCLITest, OrderedExecutorTest, TestTrie, 
ManagedSchemaRoundRobinCloudTest, OverseerRolesTest, 
IndexSizeTriggerMixedBoundsTest, DocumentAnalysisRequestHandlerTest, 
TestInitQParser, FileUtilsTest, TestInfoStreamLogging, 
TestFieldCollectionResource, TestSimPolicyCloud, CoreAdminRequestStatusTest, 
AddSchemaFieldsUpdateProcessorFactoryTest, TestSubQueryTransformerDistrib, 
V2StandaloneTest, TestSimGenericDistributedQueue, 
ClassificationUpdateProcessorFactoryTest, TestManagedSynonymGraphFilterFactory, 
TestBooleanSimilarityFactory, BigEndianAscendingWordDeserializerTest, 
MinimalSchemaTest, TestMinMaxOnMultiValuedField, SOLR749Test, 
TestJsonRangeFacets, BJQParserTest, JvmMetricsTest, TestGroupingSearch, 
TestHttpShardHandlerFactory, SparseHLLTest, TriggerIntegrationTest, 
TestAuthorizationFramework, TlogReplayBufferedWhileIndexingTest, 
TestClusterStateMutator, ChaosMonkeySafeLeaderTest, DeleteStatusTest, 
LargeFieldTest, TestSolrCloudWithSecureImpersonation, TestExportWriter, 
TestCSVLoader, TestCSVResponseWriter, FacetPivot2CollectionsTest, 
OutOfBoxZkACLAndCredentialsProvidersTest, QueryParsingTest, 
FieldMutatingUpdateProcessorTest, TestNamedUpdateProcessors, 
ChangedSchemaMergeTest, TestJsonFacetRefinement, TestBulkSchemaAPI, 
TestSolrXml, SolrTestCaseJ4DeleteCoreTest, ComputePlanActionTest, 
ForceLeaderWithTlogReplicasTest, CheckHdfsIndexTest]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=CheckHdfsIndexTest 
-Dtests.seed=5CCE405D7E0D62F3 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=es-DO -Dtests.timezone=America/Creston -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.00s J3 | CheckHdfsIndexTest (suite) <<<
   [junit4]    > Throwable #1: 
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.index.hdfs.CheckHdfsIndexTest: 
   [junit4]    >    1) Thread[id=12676, name=Command processor, state=WAITING, 
group=TGRP-CheckHdfsIndexTest]
   [junit4]    >         at sun.misc.Unsafe.park(Native Method)
   [junit4]    >         at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]    >         at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]    >         at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]    >         at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
   [junit4]    >         at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)
   [junit4]    >        at 
__randomizedtesting.SeedInfo.seed([5CCE405D7E0D62F3]:0)
   [junit4] Completed [388/959 (1!)] on J3 in 64.20s, 5 tests, 1 error, 1 
skipped <<< FAILURES!

[...truncated 56399 lines...]
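The leaked thread reported above is the HDFS DataNode's "Command processor" thread (BPServiceActor$CommandProcessingThread), still parked on its command queue when the suite finished; the runner's subsequent interrupt did terminate it ("All leaked threads terminated."). Purely as a hedged sketch of one way a known Hadoop daemon thread like this can be excluded from leak detection via the com.carrotsearch.randomizedtesting ThreadFilter API; the filter class name below is hypothetical and the thread-name match is based only on the name shown in this log:

    import com.carrotsearch.randomizedtesting.ThreadFilter;
    import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;

    // Hypothetical filter: treat the HDFS DataNode "Command processor"
    // thread reported above as an ignorable leak.
    public class HdfsCommandProcessorThreadsFilter implements ThreadFilter {
      @Override
      public boolean reject(Thread t) {
        String name = t.getName();
        return name != null && name.startsWith("Command processor");
      }
    }

    // Usage sketch on the test class (assumes the randomizedtesting runner):
    // @ThreadLeakFilters(defaultFilters = true,
    //     filters = { HdfsCommandProcessorThreadsFilter.class })
    // public class CheckHdfsIndexTest extends ... { ... }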