Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4716/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseParallelGC
2 tests failed.
FAILED: junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest
Error Message:
Some resources were not closed, shutdown, or released.
Stack Trace:
java.lang.AssertionError: Some resources were not closed, shutdown, or released.
at __randomizedtesting.SeedInfo.seed([AFC1F4C965F8BAFE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:234)
at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
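
Triage note: the "Some resources were not closed, shutdown, or released." assertion comes from SolrTestCaseJ4's suite teardown (the afterClass frame above), which fails when resources registered during the run are still open at the end of the suite. Below is a minimal sketch of that style of release-tracking check; the class and method names are hypothetical, not Solr's actual tracker API.

    import java.io.Closeable;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    /**
     * Minimal sketch of a release tracker: resources register on open and
     * deregister on close; a suite-teardown hook asserts nothing is left.
     * Hypothetical names, for illustration only.
     */
    public class ReleaseTrackerSketch {
        private static final Map<Object, Exception> TRACKED = new ConcurrentHashMap<>();

        static void track(Object resource) {
            // Record where the resource was opened so a leak is easier to chase down.
            TRACKED.put(resource, new Exception("opened here"));
        }

        static void release(Object resource) {
            TRACKED.remove(resource);
        }

        /** Called from an afterClass-style hook; fails the suite if anything leaked. */
        static void assertAllReleased() {
            if (!TRACKED.isEmpty()) {
                TRACKED.values().forEach(Throwable::printStackTrace);
                throw new AssertionError("Some resources were not closed, shutdown, or released.");
            }
        }

        public static void main(String[] args) {
            Closeable leaked = () -> { /* never closed */ };
            track(leaked);       // opened but never released ...
            assertAllReleased(); // ... so the teardown check fails, as in the report above
        }
    }

A leak like this is also what typically keeps file handles open on Windows and produces the temp-dir cleanup failure reported next.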
FAILED: junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest
Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog\tlog.0000000000000000000: java.nio.file.FileSystemException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog\tlog.0000000000000000000: The process cannot access the file because it is being used by another process.
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001
Stack Trace:
java.io.IOException: Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog\tlog.0000000000000000000: java.nio.file.FileSystemException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog\tlog.0000000000000000000: The process cannot access the file because it is being used by another process.
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001\cores
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001\shard-3-001
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest AFC1F4C965F8BAFE-001
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:294)
at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:215)
at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
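
Triage note: IOUtils.rm (frame above) walks the temp tree, collects every path it cannot delete, and throws them as the single aggregated IOException shown here. On Windows an open java.io handle on tlog.0000000000000000000 (presumably left behind by the un-released resources from the first failure) blocks the file delete, and each parent directory removal then fails with DirectoryNotEmptyException. A minimal, self-contained sketch of that platform behaviour, with illustrative paths only:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.file.Files;
    import java.nio.file.Path;

    /**
     * Sketch of the Windows behaviour behind the cleanup failure above: a file
     * with an open java.io handle (a stand-in for the leaked tlog) normally
     * cannot be deleted on Windows, and removing its parent directory then
     * fails with DirectoryNotEmptyException. Paths are illustrative only.
     */
    public class WindowsDeleteSketch {
        public static void main(String[] args) throws IOException {
            Path dir = Files.createTempDirectory("tlog-sketch");
            Path tlog = dir.resolve("tlog.0000000000000000000");

            try (RandomAccessFile open = new RandomAccessFile(tlog.toFile(), "rw")) {
                open.writeLong(42L); // the handle stays open, like a transaction log that was never closed
                try {
                    Files.delete(tlog); // Windows: FileSystemException ("being used by another process")
                } catch (IOException e) {
                    System.out.println("file delete failed while the handle was open: " + e);
                }
                try {
                    Files.delete(dir);  // still contains tlog on Windows: DirectoryNotEmptyException
                } catch (IOException e) {
                    System.out.println("directory delete failed: " + e);
                }
            }
            // Once the handle is closed, the same deletions go through.
            Files.deleteIfExists(tlog);
            Files.deleteIfExists(dir);
        }
    }

On POSIX systems the first delete usually succeeds even while the handle is open, which is why this cleanup failure tends to appear only on the Windows Jenkins jobs.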
Build Log:
[...truncated 10208 lines...]
[junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
[junit4] 2> Creating dataDir:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\init-core-data-001
[junit4] 2> 940749 T5571 oas.BaseDistributedSearchTestCase.initHostContext
Setting hostContext system property: /_zz/p
[junit4] 2> 940752 T5571 oasc.ZkTestServer.run STARTING ZK TEST SERVER
[junit4] 2> 940752 T5572 oasc.ZkTestServer$2$1.setClientPort client
port:0.0.0.0/0.0.0.0:0
[junit4] 2> 940752 T5572 oasc.ZkTestServer$ZKServerMain.runFromConfig
Starting server
[junit4] 2> 940842 T5571 oasc.ZkTestServer.run start zk server on
port:54393
[junit4] 2> 940861 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig-tlog.xml
to /configs/conf1/solrconfig.xml
[junit4] 2> 940865 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\schema.xml
to /configs/conf1/schema.xml
[junit4] 2> 940867 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig.snippet.randomindexconfig.xml
to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
[junit4] 2> 940869 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\stopwords.txt
to /configs/conf1/stopwords.txt
[junit4] 2> 940872 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\protwords.txt
to /configs/conf1/protwords.txt
[junit4] 2> 940875 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\currency.xml
to /configs/conf1/currency.xml
[junit4] 2> 940877 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\enumsConfig.xml
to /configs/conf1/enumsConfig.xml
[junit4] 2> 940879 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\open-exchange-rates.json
to /configs/conf1/open-exchange-rates.json
[junit4] 2> 940882 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\mapping-ISOLatin1Accent.txt
to /configs/conf1/mapping-ISOLatin1Accent.txt
[junit4] 2> 940885 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\old_synonyms.txt
to /configs/conf1/old_synonyms.txt
[junit4] 2> 940887 T5571 oasc.AbstractZkTestCase.putConfig put
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\synonyms.txt
to /configs/conf1/synonyms.txt
[junit4] 2> 941196 T5571 oas.SolrTestCaseJ4.writeCoreProperties Writing
core.properties file to
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1
[junit4] 2> 941200 T5571 oejs.Server.doStart jetty-9.2.10.v20150310
[junit4] 2> 941201 T5571 oejsh.ContextHandler.doStart Started
o.e.j.s.ServletContextHandler@19ef0d73{/_zz/p,null,AVAILABLE}
[junit4] 2> 941206 T5571 oejs.AbstractConnector.doStart Started
ServerConnector@1d5c0652{HTTP/1.1}{127.0.0.1:54401}
[junit4] 2> 941206 T5571 oejs.Server.doStart Started @948710ms
[junit4] 2> 941206 T5571 oascse.JettySolrRunner$1.lifeCycleStarted Jetty
properties: {hostContext=/_zz/p, hostPort=54400,
coreRootDirectory=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores,
solr.data.dir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\tempDir-001/control/data}
[junit4] 2> 941207 T5571 oass.SolrDispatchFilter.init
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@4e857327
[junit4] 2> 941207 T5571 oasc.SolrResourceLoader.<init> new
SolrResourceLoader for directory:
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\'
[junit4] 2> 941240 T5571 oasc.SolrXmlConfig.fromFile Loading container
configuration from
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\solr.xml
[junit4] 2> 941260 T5571 oasc.CorePropertiesLocator.<init> Config-defined
core root directory:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores
[junit4] 2> 941260 T5571 oasc.CoreContainer.<init> New CoreContainer
1037530738
[junit4] 2> 941260 T5571 oasc.CoreContainer.load Loading cores into
CoreContainer
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\]
[junit4] 2> 941260 T5571 oasc.CoreContainer.load loading shared library:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\lib
[junit4] 2> 941260 T5571 oasc.SolrResourceLoader.addToClassLoader WARN
Can't find (or read) directory to add to classloader: lib (resolved as:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\lib).
[junit4] 2> 941268 T5571 oashc.HttpShardHandlerFactory.init created with
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost :
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize :
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy :
false,useRetries : false,
[junit4] 2> 941272 T5571 oasu.UpdateShardHandler.<init> Creating
UpdateShardHandler HTTP client with params:
socketTimeout=340000&connTimeout=45000&retry=true
[junit4] 2> 941272 T5571 oasl.LogWatcher.createWatcher SLF4J impl is
org.slf4j.impl.Log4jLoggerFactory
[junit4] 2> 941273 T5571 oasl.LogWatcher.newRegisteredLogWatcher
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
[junit4] 2> 941273 T5571 oasc.CoreContainer.load Node Name: 127.0.0.1
[junit4] 2> 941273 T5571 oasc.ZkContainer.initZooKeeper Zookeeper
client=127.0.0.1:54393/solr
[junit4] 2> 941273 T5571 oasc.ZkController.checkChrootPath zkHost includes
chroot
[junit4] 2> 941298 T5571 n:127.0.0.1:54400__zz%2Fp
oasc.ZkController.createEphemeralLiveNode Register node as live in
ZooKeeper:/live_nodes/127.0.0.1:54400__zz%2Fp
[junit4] 2> 941304 T5571 n:127.0.0.1:54400__zz%2Fp oasc.Overseer.close
Overseer (id=null) closing
[junit4] 2> 941306 T5571 n:127.0.0.1:54400__zz%2Fp
oasc.OverseerElectionContext.runLeaderProcess I am going to be the leader
127.0.0.1:54400__zz%2Fp
[junit4] 2> 941307 T5571 n:127.0.0.1:54400__zz%2Fp oasc.Overseer.start
Overseer (id=93864173895942147-127.0.0.1:54400__zz%2Fp-n_0000000000) starting
[junit4] 2> 941315 T5571 n:127.0.0.1:54400__zz%2Fp
oasc.OverseerAutoReplicaFailoverThread.<init> Starting
OverseerAutoReplicaFailoverThread autoReplicaFailoverWorkLoopDelay=10000
autoReplicaFailoverWaitAfterExpiration=30000
autoReplicaFailoverBadNodeExpiration=60000
[junit4] 2> 941316 T5601 n:127.0.0.1:54400__zz%2Fp
oasc.OverseerCollectionProcessor.run Process current queue of collection
creations
[junit4] 2> 941317 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run Starting to work on the main queue
[junit4] 2> 941320 T5571 n:127.0.0.1:54400__zz%2Fp
oasc.CoreContainer.initializeAuthenticationPlugin No authentication plugin used.
[junit4] 2> 941320 T5571 n:127.0.0.1:54400__zz%2Fp
oasc.CoreContainer.intializeAuthorizationPlugin Security conf doesn't exist.
Skipping setup for authorization module.
[junit4] 2> 941321 T5571 n:127.0.0.1:54400__zz%2Fp
oasc.CorePropertiesLocator.discover Looking for core definitions underneath
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores
[junit4] 2> 941322 T5571 n:127.0.0.1:54400__zz%2Fp
oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {config=solrconfig.xml,
instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1, loadOnStartup=true,
absoluteInstDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1\, coreNodeName=,
collection=control_collection, dataDir=data\, transient=false, shard=,
name=collection1, schema=schema.xml}
[junit4] 2> 941322 T5571 n:127.0.0.1:54400__zz%2Fp
oasc.CorePropertiesLocator.discoverUnder Found core collection1 in
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1\
[junit4] 2> 941322 T5571 n:127.0.0.1:54400__zz%2Fp
oasc.CorePropertiesLocator.discover Found 1 core definitions
[junit4] 2> 941324 T5603 n:127.0.0.1:54400__zz%2Fp c:control_collection
x:collection1 oasc.ZkController.publish publishing core=collection1 state=down
collection=control_collection
[junit4] 2> 941324 T5603 n:127.0.0.1:54400__zz%2Fp c:control_collection
x:collection1 oasc.ZkController.publish numShards not found on descriptor -
reading it from system property
[junit4] 2> 941325 T5603 n:127.0.0.1:54400__zz%2Fp
oasc.ZkController.waitForCoreNodeName look for our core node name
[junit4] 2> 941325 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 941326 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "numShards":"1",
[junit4] 2> "base_url":"http://127.0.0.1:54400/_zz/p",
[junit4] 2> "shard":null,
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"control_collection",
[junit4] 2> "node_name":"127.0.0.1:54400__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "state":"down"} current state version: 0
[junit4] 2> 941326 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Update state numShards=1 message={
[junit4] 2> "numShards":"1",
[junit4] 2> "base_url":"http://127.0.0.1:54400/_zz/p",
[junit4] 2> "shard":null,
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"control_collection",
[junit4] 2> "node_name":"127.0.0.1:54400__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "state":"down"}
[junit4] 2> 941327 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ClusterStateMutator.createCollection building a new cName:
control_collection
[junit4] 2> 941327 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
[junit4] 2> 942216 T5603 n:127.0.0.1:54400__zz%2Fp
oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for
collection1
[junit4] 2> 942216 T5603 n:127.0.0.1:54400__zz%2Fp
oasc.ZkController.createCollectionZkNode Check for collection
zkNode:control_collection
[junit4] 2> 942217 T5603 n:127.0.0.1:54400__zz%2Fp
oasc.ZkController.createCollectionZkNode Collection zkNode exists
[junit4] 2> 942218 T5603 n:127.0.0.1:54400__zz%2Fp
oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory:
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1\'
[junit4] 2> 942231 T5603 n:127.0.0.1:54400__zz%2Fp oasc.Config.<init>
loaded config solrconfig.xml with version 0
[junit4] 2> 942239 T5603 n:127.0.0.1:54400__zz%2Fp
oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
[junit4] 2> 942253 T5603 n:127.0.0.1:54400__zz%2Fp oasc.SolrConfig.<init>
Using Lucene MatchVersion: 5.2.0
[junit4] 2> 942284 T5603 n:127.0.0.1:54400__zz%2Fp oasc.SolrConfig.<init>
Loaded SolrConfig: solrconfig.xml
[junit4] 2> 942285 T5603 n:127.0.0.1:54400__zz%2Fp
oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
[junit4] 2> 942294 T5603 n:127.0.0.1:54400__zz%2Fp
oass.IndexSchema.readSchema [collection1] Schema name=test
[junit4] 2> 942550 T5603 n:127.0.0.1:54400__zz%2Fp
oass.OpenExchangeRatesOrgProvider.init Initialized with
rates=open-exchange-rates.json, refreshInterval=1440.
[junit4] 2> 942562 T5603 n:127.0.0.1:54400__zz%2Fp
oass.IndexSchema.readSchema default search field in schema is text
[junit4] 2> 942565 T5603 n:127.0.0.1:54400__zz%2Fp
oass.IndexSchema.readSchema unique key field: id
[junit4] 2> 942573 T5603 n:127.0.0.1:54400__zz%2Fp
oass.FileExchangeRateProvider.reload Reloading exchange rates from file
currency.xml
[junit4] 2> 942577 T5603 n:127.0.0.1:54400__zz%2Fp
oass.FileExchangeRateProvider.reload Reloading exchange rates from file
currency.xml
[junit4] 2> 942583 T5603 n:127.0.0.1:54400__zz%2Fp
oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from
open-exchange-rates.json
[junit4] 2> 942584 T5603 n:127.0.0.1:54400__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key
IMPORTANT NOTE
[junit4] 2> 942585 T5603 n:127.0.0.1:54400__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key,
got STRING
[junit4] 2> 942585 T5603 n:127.0.0.1:54400__zz%2Fp
oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from
open-exchange-rates.json
[junit4] 2> 942586 T5603 n:127.0.0.1:54400__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key
IMPORTANT NOTE
[junit4] 2> 942586 T5603 n:127.0.0.1:54400__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key,
got STRING
[junit4] 2> 942586 T5603 n:127.0.0.1:54400__zz%2Fp
oasc.CoreContainer.create Creating SolrCore 'collection1' using configuration
from collection control_collection
[junit4] 2> 942586 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrCore.initDirectoryFactory org.apache.solr.core.MockDirectoryFactory
[junit4] 2> 942586 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at
[C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1\], dataDir=[null]
[junit4] 2> 942586 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to
JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@2db36e6f
[junit4] 2> 942587 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.CachingDirectoryFactory.get return new directory for
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1\data\
[junit4] 2> 942587 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrCore.getNewIndexDir New index directory detected: old=null
new=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1\data\index/
[junit4] 2> 942587 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrCore.initIndex WARN [collection1] Solr index directory
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1\data\index' doesn't exist.
Creating new index...
[junit4] 2> 942588 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.CachingDirectoryFactory.get return new directory for
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\control-001\cores\collection1\data\index
[junit4] 2> 942588 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy:
minMergeSize=1000, mergeFactor=14, maxMergeSize=9223372036854775807,
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true,
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12,
noCFSRatio=0.8856516091129182]
[junit4] 2> 942589 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
[junit4] 2>
commit{dir=MockDirectoryWrapper(RAMDirectory@63ba3fa5
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@58a89cc9),segFN=segments_1,generation=1}
[junit4] 2> 942589 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
[junit4] 2> 942593 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"nodistrib"
[junit4] 2> 942595 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"dedupe"
[junit4] 2> 942595 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init inserting
DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
[junit4] 2> 942595 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"stored_sig"
[junit4] 2> 942595 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init inserting
DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
[junit4] 2> 942596 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"distrib-dup-test-chain-explicit"
[junit4] 2> 942596 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"distrib-dup-test-chain-implicit"
[junit4] 2> 942597 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init inserting
DistributedUpdateProcessorFactory into updateRequestProcessorChain
"distrib-dup-test-chain-implicit"
[junit4] 2> 942597 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined
as default, creating implicit default
[junit4] 2> 942606 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 942608 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 942611 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 942613 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 942616 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.RequestHandlers.initHandlersFromConfig Registered paths:
/replication,standard,/admin/segments,/admin/file,/get,/admin/logging,/schema,/update/json,/admin/threads,/admin/properties,/admin/ping,/admin/system,/update,/admin/mbeans,/admin/plugins,/update/json/docs,/admin/luke,/update/csv,/config
[junit4] 2> 942618 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrCore.initStatsCache Using default statsCache cache:
org.apache.solr.search.stats.LocalStatsCache
[junit4] 2> 942619 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.UpdateHandler.<init> Using UpdateLog implementation:
org.apache.solr.update.UpdateLog
[junit4] 2> 942619 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=256
[junit4] 2> 942620 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.CommitTracker.<init> Hard AutoCommit: disabled
[junit4] 2> 942620 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.CommitTracker.<init> Soft AutoCommit: disabled
[junit4] 2> 942620 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy:
maxMergeAtOnce=40, maxMergeAtOnceExplicit=19, maxMergedSegmentMB=96.5947265625,
floorSegmentMB=1.111328125, forceMergeDeletesPctAllowed=1.598420489017689,
segmentsPerTier=14.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0
[junit4] 2> 942620 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
[junit4] 2>
commit{dir=MockDirectoryWrapper(RAMDirectory@63ba3fa5
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@58a89cc9),segFN=segments_1,generation=1}
[junit4] 2> 942620 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
[junit4] 2> 942621 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oass.SolrIndexSearcher.<init> Opening Searcher@51736ede[collection1] main
[junit4] 2> 942621 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.UpdateLog.onFirstSearcher On first searcher opened, looking up max value
of version field
[junit4] 2> 942621 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.VersionInfo.getMaxVersionFromIndex Refreshing highest value of _version_
for 256 version buckets from index
[junit4] 2> 942621 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.VersionInfo.getMaxVersionFromIndex WARN No terms found for _version_,
cannot seed version bucket highest value from index
[junit4] 2> 942621 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.UpdateLog.seedBucketsWithHighestVersion WARN Could not find max version in
index or recent updates, using new clock 1501826784289619968
[junit4] 2> 942621 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasu.UpdateLog.seedBucketsWithHighestVersion Took 0 ms to seed version buckets
with highest version 1501826784289619968
[junit4] 2> 942622 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for
the RestManager with znodeBase: /configs/conf1
[junit4] 2> 942623 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured
ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 942623 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasr.RestManager.init Initializing RestManager with initArgs: {}
[junit4] 2> 942623 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasr.ManagedResourceStorage.load Reading _rest_managed.json using
ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 942625 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found
for znode /configs/conf1/_rest_managed.json
[junit4] 2> 942625 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using
ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 942625 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasr.RestManager.init Initializing 0 registered ManagedResources
[junit4] 2> 942625 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oash.ReplicationHandler.inform Commits will be reserved for 10000
[junit4] 2> 942625 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.ZkController.getConfDirListeners watch zkdir /configs/conf1
[junit4] 2> 942625 T5604 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.SolrCore.registerSearcher [collection1] Registered new searcher
Searcher@51736ede[collection1]
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 942626 T5603 n:127.0.0.1:54400__zz%2Fp x:collection1
oasc.CoreContainer.registerCore registering core: collection1
[junit4] 2> 942626 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ZkController.register Register replica -
core:collection1 address:http://127.0.0.1:54400/_zz/p
collection:control_collection shard:shard1
[junit4] 2> 942626 T5571 n:127.0.0.1:54400__zz%2Fp
oass.SolrDispatchFilter.init
user.dir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1
[junit4] 2> 942626 T5571 n:127.0.0.1:54400__zz%2Fp
oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
[junit4] 2> 942634 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess Running
the leader process for shard shard1
[junit4] 2> 942636 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 942636 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp
Enough replicas found to continue.
[junit4] 2> 942636 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I may
be the new leader - try and sync
[junit4] 2> ASYNC NEW_CORE C2132 name=collection1
org.apache.solr.core.SolrCore@4d479da5
url=http://127.0.0.1:54400/_zz/p/collection1 node=127.0.0.1:54400__zz%2Fp
C2132_STATE=coll:control_collection core:collection1
props:{base_url=http://127.0.0.1:54400/_zz/p, core=collection1,
node_name=127.0.0.1:54400__zz%2Fp, state=down}
[junit4] 2> 942637 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 C2132 oasc.SyncStrategy.sync Sync replicas to
http://127.0.0.1:54400/_zz/p/collection1/
[junit4] 2> 942637 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 C2132 oasc.SyncStrategy.syncReplicas Sync Success - now
sync replicas to me
[junit4] 2> 942637 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 C2132 oasc.SyncStrategy.syncToMe
http://127.0.0.1:54400/_zz/p/collection1/ has no replicas
[junit4] 2> 942637 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am
the new leader: http://127.0.0.1:54400/_zz/p/collection1/ shard1
[junit4] 2> 942637 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "operation":"leader",
[junit4] 2> "shard":"shard1",
[junit4] 2> "collection":"control_collection"} current state
version: 1
[junit4] 2> 942638 T5571 oasc.ChaosMonkey.monkeyLog monkey: init - expire
sessions:false cause connection loss:false
[junit4] 2> 942638 T5571 oasc.AbstractFullDistribZkTestBase.createJettys
Creating collection1 with stateFormat=2
[junit4] 2> 942642 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 942643 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "operation":"leader",
[junit4] 2> "shard":"shard1",
[junit4] 2> "collection":"control_collection",
[junit4] 2> "base_url":"http://127.0.0.1:54400/_zz/p",
[junit4] 2> "core":"collection1",
[junit4] 2> "state":"active"} current state version: 1
[junit4] 2> 942647 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "operation":"create",
[junit4] 2> "name":"collection1",
[junit4] 2> "numShards":"2",
[junit4] 2> "stateFormat":"2"} current state version: 1
[junit4] 2> 942647 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ClusterStateMutator.createCollection building a new cName: collection1
[junit4] 2> 942653 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 942653 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ZkStateWriter.writePendingUpdates going to create_collection
/collections/collection1/state.json
[junit4] 2> 942688 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ZkController.register We are
http://127.0.0.1:54400/_zz/p/collection1/ and leader is
http://127.0.0.1:54400/_zz/p/collection1/
[junit4] 2> 942688 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ZkController.register No LogReplay needed for
core=collection1 baseURL=http://127.0.0.1:54400/_zz/p
[junit4] 2> 942688 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ZkController.checkRecovery I am the leader, no
recovery necessary
[junit4] 2> 942688 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ZkController.publish publishing core=collection1
state=active collection=control_collection
[junit4] 2> 942688 T5607 n:127.0.0.1:54400__zz%2Fp c:control_collection
s:shard1 x:collection1 oasc.ZkController.publish numShards not found on
descriptor - reading it from system property
[junit4] 2> 942690 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 942691 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "numShards":"2",
[junit4] 2> "base_url":"http://127.0.0.1:54400/_zz/p",
[junit4] 2> "shard":"shard1",
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"control_collection",
[junit4] 2> "node_name":"127.0.0.1:54400__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "core_node_name":"core_node1",
[junit4] 2> "state":"active"} current state version: 3
[junit4] 2> 942692 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Update state numShards=2 message={
[junit4] 2> "numShards":"2",
[junit4] 2> "base_url":"http://127.0.0.1:54400/_zz/p",
[junit4] 2> "shard":"shard1",
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"control_collection",
[junit4] 2> "node_name":"127.0.0.1:54400__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "core_node_name":"core_node1",
[junit4] 2> "state":"active"}
[junit4] 2> 942917 T5571 oas.SolrTestCaseJ4.writeCoreProperties Writing
core.properties file to
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1
[junit4] 2> 942920 T5571 oasc.AbstractFullDistribZkTestBase.createJettys
create jetty 1 in directory
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001
[junit4] 2> 942921 T5571 oejs.Server.doStart jetty-9.2.10.v20150310
[junit4] 2> 942923 T5571 oejsh.ContextHandler.doStart Started
o.e.j.s.ServletContextHandler@79e8d202{/_zz/p,null,AVAILABLE}
[junit4] 2> 942924 T5571 oejs.AbstractConnector.doStart Started
ServerConnector@2433bfd5{HTTP/1.1}{127.0.0.1:54429}
[junit4] 2> 942925 T5571 oejs.Server.doStart Started @950428ms
[junit4] 2> 942925 T5571 oascse.JettySolrRunner$1.lifeCycleStarted Jetty
properties: {hostPort=54428,
solr.data.dir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\tempDir-001/jetty1,
coreRootDirectory=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores, solrconfig=solrconfig.xml,
hostContext=/_zz/p}
[junit4] 2> 942926 T5571 oass.SolrDispatchFilter.init
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@4e857327
[junit4] 2> 942926 T5571 oasc.SolrResourceLoader.<init> new
SolrResourceLoader for directory:
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\'
[junit4] 2> 942950 T5571 oasc.SolrXmlConfig.fromFile Loading container
configuration from
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\solr.xml
[junit4] 2> 942971 T5571 oasc.CorePropertiesLocator.<init> Config-defined
core root directory:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores
[junit4] 2> 942972 T5571 oasc.CoreContainer.<init> New CoreContainer
276554978
[junit4] 2> 942972 T5571 oasc.CoreContainer.load Loading cores into
CoreContainer
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\]
[junit4] 2> 942972 T5571 oasc.CoreContainer.load loading shared library:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\lib
[junit4] 2> 942972 T5571 oasc.SolrResourceLoader.addToClassLoader WARN
Can't find (or read) directory to add to classloader: lib (resolved as:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\lib).
[junit4] 2> 942980 T5571 oashc.HttpShardHandlerFactory.init created with
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost :
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize :
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy :
false,useRetries : false,
[junit4] 2> 942982 T5571 oasu.UpdateShardHandler.<init> Creating
UpdateShardHandler HTTP client with params:
socketTimeout=340000&connTimeout=45000&retry=true
[junit4] 2> 942983 T5571 oasl.LogWatcher.createWatcher SLF4J impl is
org.slf4j.impl.Log4jLoggerFactory
[junit4] 2> 942983 T5571 oasl.LogWatcher.newRegisteredLogWatcher
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
[junit4] 2> 942983 T5571 oasc.CoreContainer.load Node Name: 127.0.0.1
[junit4] 2> 942984 T5571 oasc.ZkContainer.initZooKeeper Zookeeper
client=127.0.0.1:54393/solr
[junit4] 2> 942984 T5571 oasc.ZkController.checkChrootPath zkHost includes
chroot
[junit4] 2> 943895 T5571 n:127.0.0.1:54428__zz%2Fp
oasc.ZkController.createEphemeralLiveNode Register node as live in
ZooKeeper:/live_nodes/127.0.0.1:54428__zz%2Fp
[junit4] 2> 943900 T5571 n:127.0.0.1:54428__zz%2Fp oasc.Overseer.close
Overseer (id=null) closing
[junit4] 2> 943902 T5571 n:127.0.0.1:54428__zz%2Fp
oasc.CoreContainer.initializeAuthenticationPlugin No authentication plugin used.
[junit4] 2> 943903 T5571 n:127.0.0.1:54428__zz%2Fp
oasc.CoreContainer.intializeAuthorizationPlugin Security conf doesn't exist.
Skipping setup for authorization module.
[junit4] 2> 943905 T5571 n:127.0.0.1:54428__zz%2Fp
oasc.CorePropertiesLocator.discover Looking for core definitions underneath
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores
[junit4] 2> 943906 T5571 n:127.0.0.1:54428__zz%2Fp
oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {loadOnStartup=true,
schema=schema.xml,
instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1, config=solrconfig.xml,
shard=, name=collection1,
absoluteInstDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1\, coreNodeName=,
transient=false, dataDir=data\, collection=collection1}
[junit4] 2> 943906 T5571 n:127.0.0.1:54428__zz%2Fp
oasc.CorePropertiesLocator.discoverUnder Found core collection1 in
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1\
[junit4] 2> 943906 T5571 n:127.0.0.1:54428__zz%2Fp
oasc.CorePropertiesLocator.discover Found 1 core definitions
[junit4] 2> 943909 T5632 n:127.0.0.1:54428__zz%2Fp c:collection1
x:collection1 oasc.ZkController.publish publishing core=collection1 state=down
collection=collection1
[junit4] 2> 943909 T5632 n:127.0.0.1:54428__zz%2Fp c:collection1
x:collection1 oasc.ZkController.publish numShards not found on descriptor -
reading it from system property
[junit4] 2> 943911 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 943911 T5632 n:127.0.0.1:54428__zz%2Fp
oasc.ZkController.preRegister Registering watch for external collection
collection1
[junit4] 2> 943912 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "numShards":"2",
[junit4] 2> "base_url":"http://127.0.0.1:54428/_zz/p",
[junit4] 2> "shard":null,
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"collection1",
[junit4] 2> "node_name":"127.0.0.1:54428__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "state":"down"} current state version: 4
[junit4] 2> 943912 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Update state numShards=2 message={
[junit4] 2> "numShards":"2",
[junit4] 2> "base_url":"http://127.0.0.1:54428/_zz/p",
[junit4] 2> "shard":null,
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"collection1",
[junit4] 2> "node_name":"127.0.0.1:54428__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "state":"down"}
[junit4] 2> 943912 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Collection already exists with numShards=2
[junit4] 2> 943912 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard2
[junit4] 2> 943912 T5632 n:127.0.0.1:54428__zz%2Fp
oasc.ZkController.waitForCoreNodeName look for our core node name
[junit4] 2> 944004 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ZkStateWriter.writePendingUpdates going to update_collection
/collections/collection1/state.json version: 0
[junit4] 2> 944825 T5632 n:127.0.0.1:54428__zz%2Fp
oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for
collection1
[junit4] 2> 944825 T5632 n:127.0.0.1:54428__zz%2Fp
oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
[junit4] 2> 944826 T5632 n:127.0.0.1:54428__zz%2Fp
oasc.ZkController.createCollectionZkNode Collection zkNode exists
[junit4] 2> 944827 T5632 n:127.0.0.1:54428__zz%2Fp
oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory:
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1\'
[junit4] 2> 944845 T5632 n:127.0.0.1:54428__zz%2Fp oasc.Config.<init>
loaded config solrconfig.xml with version 0
[junit4] 2> 944853 T5632 n:127.0.0.1:54428__zz%2Fp
oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
[junit4] 2> 944875 T5632 n:127.0.0.1:54428__zz%2Fp oasc.SolrConfig.<init>
Using Lucene MatchVersion: 5.2.0
[junit4] 2> 944908 T5632 n:127.0.0.1:54428__zz%2Fp oasc.SolrConfig.<init>
Loaded SolrConfig: solrconfig.xml
[junit4] 2> 944910 T5632 n:127.0.0.1:54428__zz%2Fp
oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
[junit4] 2> 944918 T5632 n:127.0.0.1:54428__zz%2Fp
oass.IndexSchema.readSchema [collection1] Schema name=test
[junit4] 2> 945192 T5632 n:127.0.0.1:54428__zz%2Fp
oass.OpenExchangeRatesOrgProvider.init Initialized with
rates=open-exchange-rates.json, refreshInterval=1440.
[junit4] 2> 945203 T5632 n:127.0.0.1:54428__zz%2Fp
oass.IndexSchema.readSchema default search field in schema is text
[junit4] 2> 945205 T5632 n:127.0.0.1:54428__zz%2Fp
oass.IndexSchema.readSchema unique key field: id
[junit4] 2> 945219 T5632 n:127.0.0.1:54428__zz%2Fp
oass.FileExchangeRateProvider.reload Reloading exchange rates from file
currency.xml
[junit4] 2> 945222 T5632 n:127.0.0.1:54428__zz%2Fp
oass.FileExchangeRateProvider.reload Reloading exchange rates from file
currency.xml
[junit4] 2> 945226 T5632 n:127.0.0.1:54428__zz%2Fp
oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from
open-exchange-rates.json
[junit4] 2> 945228 T5632 n:127.0.0.1:54428__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key
IMPORTANT NOTE
[junit4] 2> 945228 T5632 n:127.0.0.1:54428__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key,
got STRING
[junit4] 2> 945229 T5632 n:127.0.0.1:54428__zz%2Fp
oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from
open-exchange-rates.json
[junit4] 2> 945230 T5632 n:127.0.0.1:54428__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key
IMPORTANT NOTE
[junit4] 2> 945230 T5632 n:127.0.0.1:54428__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key,
got STRING
[junit4] 2> 945230 T5632 n:127.0.0.1:54428__zz%2Fp
oasc.CoreContainer.create Creating SolrCore 'collection1' using configuration
from collection collection1
[junit4] 2> 945230 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrCore.initDirectoryFactory org.apache.solr.core.MockDirectoryFactory
[junit4] 2> 945231 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at
[C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1\], dataDir=[null]
[junit4] 2> 945231 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to
JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@2db36e6f
[junit4] 2> 945232 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.CachingDirectoryFactory.get return new directory for
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1\data\
[junit4] 2> 945232 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrCore.getNewIndexDir New index directory detected: old=null
new=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1\data\index/
[junit4] 2> 945232 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrCore.initIndex WARN [collection1] Solr index directory
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1\data\index' doesn't exist.
Creating new index...
[junit4] 2> 945232 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.CachingDirectoryFactory.get return new directory for
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-1-001\cores\collection1\data\index
[junit4] 2> 945233 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy:
minMergeSize=1000, mergeFactor=14, maxMergeSize=9223372036854775807,
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true,
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12,
noCFSRatio=0.8856516091129182]
[junit4] 2> 945233 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
[junit4] 2>
commit{dir=MockDirectoryWrapper(RAMDirectory@729c2df
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@46b8c225),segFN=segments_1,generation=1}
[junit4] 2> 945234 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
[junit4] 2> 945240 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"nodistrib"
[junit4] 2> 945240 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"dedupe"
[junit4] 2> 945241 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init inserting
DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
[junit4] 2> 945241 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"stored_sig"
[junit4] 2> 945241 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init inserting
DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
[junit4] 2> 945241 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"distrib-dup-test-chain-explicit"
[junit4] 2> 945242 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"distrib-dup-test-chain-implicit"
[junit4] 2> 945242 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init inserting
DistributedUpdateProcessorFactory into updateRequestProcessorChain
"distrib-dup-test-chain-implicit"
[junit4] 2> 945242 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined
as default, creating implicit default
[junit4] 2> 945252 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 945255 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 945257 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 945260 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 945267 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.RequestHandlers.initHandlersFromConfig Registered paths:
/replication,standard,/admin/segments,/admin/file,/get,/admin/logging,/schema,/update/json,/admin/threads,/admin/properties,/admin/ping,/admin/system,/update,/admin/mbeans,/admin/plugins,/update/json/docs,/admin/luke,/update/csv,/config
[junit4] 2> 945269 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrCore.initStatsCache Using default statsCache cache:
org.apache.solr.search.stats.LocalStatsCache
[junit4] 2> 945270 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.UpdateHandler.<init> Using UpdateLog implementation:
org.apache.solr.update.UpdateLog
[junit4] 2> 945270 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=256
[junit4] 2> 945272 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.CommitTracker.<init> Hard AutoCommit: disabled
[junit4] 2> 945272 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.CommitTracker.<init> Soft AutoCommit: disabled
[junit4] 2> 945273 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy:
maxMergeAtOnce=40, maxMergeAtOnceExplicit=19, maxMergedSegmentMB=96.5947265625,
floorSegmentMB=1.111328125, forceMergeDeletesPctAllowed=1.598420489017689,
segmentsPerTier=14.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0
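(For reference, a minimal sketch of how the TieredMergePolicy values dumped in the log line above map onto Lucene's public API. This is illustrative only and assumes the plain Lucene 5.x classes; the test itself uses Solr's RandomMergePolicy, which, per the log, merely wraps a policy configured like this.)

    // Illustrative sketch only: a TieredMergePolicy carrying the parameters logged above.
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.TieredMergePolicy;

    public class TieredMergePolicySketch {
        public static void main(String[] args) {
            TieredMergePolicy mp = new TieredMergePolicy();
            mp.setMaxMergeAtOnce(40);
            mp.setMaxMergeAtOnceExplicit(19);
            mp.setMaxMergedSegmentMB(96.5947265625);
            mp.setFloorSegmentMB(1.111328125);
            mp.setForceMergeDeletesPctAllowed(1.598420489017689);
            mp.setSegmentsPerTier(14.0);
            mp.setNoCFSRatio(1.0); // noCFSRatio=1.0: merged segments use the compound file format

            // Hand the policy to an IndexWriterConfig, as Solr ultimately does for its cores.
            IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
            iwc.setMergePolicy(mp);
        }
    }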
[junit4] 2> 945274 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
[junit4] 2>
commit{dir=MockDirectoryWrapper(RAMDirectory@729c2df
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@46b8c225),segFN=segments_1,generation=1}
[junit4] 2> 945274 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
[junit4] 2> 945274 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oass.SolrIndexSearcher.<init> Opening Searcher@52721ac8[collection1] main
[junit4] 2> 945275 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.UpdateLog.onFirstSearcher On first searcher opened, looking up max value
of version field
[junit4] 2> 945275 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.VersionInfo.getMaxVersionFromIndex Refreshing highest value of _version_
for 256 version buckets from index
[junit4] 2> 945275 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.VersionInfo.getMaxVersionFromIndex WARN No terms found for _version_,
cannot seed version bucket highest value from index
[junit4] 2> 945275 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.UpdateLog.seedBucketsWithHighestVersion WARN Could not find max version in
index or recent updates, using new clock 1501826787072540672
[junit4] 2> 945275 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasu.UpdateLog.seedBucketsWithHighestVersion Took 0 ms to seed version buckets
with highest version 1501826787072540672
[junit4] 2> 945278 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for
the RestManager with znodeBase: /configs/conf1
[junit4] 2> 945279 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured
ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 945279 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasr.RestManager.init Initializing RestManager with initArgs: {}
[junit4] 2> 945279 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasr.ManagedResourceStorage.load Reading _rest_managed.json using
ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 945280 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasr.ManagedResourceStorage$ZooKeeperStorageIO.openInputStream No data found
for znode /configs/conf1/_rest_managed.json
[junit4] 2> 945280 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasr.ManagedResourceStorage.load Loaded null at path _rest_managed.json using
ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 945280 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasr.RestManager.init Initializing 0 registered ManagedResources
[junit4] 2> 945280 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oash.ReplicationHandler.inform Commits will be reserved for 10000
[junit4] 2> 945281 T5633 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.SolrCore.registerSearcher [collection1] Registered new searcher
Searcher@52721ac8[collection1]
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 945282 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.ZkController.getConfDirListeners watch zkdir /configs/conf1
[junit4] 2> 945283 T5632 n:127.0.0.1:54428__zz%2Fp x:collection1
oasc.CoreContainer.registerCore registering core: collection1
[junit4] 2> 945285 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ZkController.register Register replica - core:collection1
address:http://127.0.0.1:54428/_zz/p collection:collection1 shard:shard2
[junit4] 2> 945285 T5571 n:127.0.0.1:54428__zz%2Fp
oass.SolrDispatchFilter.init
user.dir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1
[junit4] 2> 945285 T5571 n:127.0.0.1:54428__zz%2Fp
oass.SolrDispatchFilter.init SolrDispatchFilter.init() done
[junit4] 2> 945292 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess Running the
leader process for shard shard2
[junit4] 2> 945294 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 945295 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ShardLeaderElectionContext.waitForReplicasToComeUp Enough
replicas found to continue.
[junit4] 2> 945295 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I may be the new
leader - try and sync
[junit4] 2> ASYNC NEW_CORE C2133 name=collection1
org.apache.solr.core.SolrCore@28a754a0
url=http://127.0.0.1:54428/_zz/p/collection1 node=127.0.0.1:54428__zz%2Fp
C2133_STATE=coll:collection1 core:collection1
props:{base_url=http://127.0.0.1:54428/_zz/p, core=collection1,
node_name=127.0.0.1:54428__zz%2Fp, state=down}
[junit4] 2> 945295 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 C2133 oasc.SyncStrategy.sync Sync replicas to
http://127.0.0.1:54428/_zz/p/collection1/
[junit4] 2> 945295 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 C2133 oasc.SyncStrategy.syncReplicas Sync Success - now sync
replicas to me
[junit4] 2> 945295 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "operation":"leader",
[junit4] 2> "shard":"shard2",
[junit4] 2> "collection":"collection1"} current state version: 4
[junit4] 2> 945295 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 C2133 oasc.SyncStrategy.syncToMe
http://127.0.0.1:54428/_zz/p/collection1/ has no replicas
[junit4] 2> 945296 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ShardLeaderElectionContext.runLeaderProcess I am the new
leader: http://127.0.0.1:54428/_zz/p/collection1/ shard2
[junit4] 2> 945297 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ZkStateWriter.writePendingUpdates going to update_collection
/collections/collection1/state.json version: 1
[junit4] 2> 945304 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "operation":"leader",
[junit4] 2> "shard":"shard2",
[junit4] 2> "collection":"collection1",
[junit4] 2> "base_url":"http://127.0.0.1:54428/_zz/p",
[junit4] 2> "core":"collection1",
[junit4] 2> "state":"active"} current state version: 4
[junit4] 2> 945305 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ZkStateWriter.writePendingUpdates going to update_collection
/collections/collection1/state.json version: 2
[junit4] 2> 945309 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 945354 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ZkController.register We are
http://127.0.0.1:54428/_zz/p/collection1/ and leader is
http://127.0.0.1:54428/_zz/p/collection1/
[junit4] 2> 945354 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ZkController.register No LogReplay needed for
core=collection1 baseURL=http://127.0.0.1:54428/_zz/p
[junit4] 2> 945354 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ZkController.checkRecovery I am the leader, no recovery
necessary
[junit4] 2> 945354 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ZkController.publish publishing core=collection1
state=active collection=collection1
[junit4] 2> 945354 T5636 n:127.0.0.1:54428__zz%2Fp c:collection1 s:shard2
x:collection1 oasc.ZkController.publish numShards not found on descriptor -
reading it from system property
[junit4] 2> 945356 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 945357 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "numShards":"2",
[junit4] 2> "base_url":"http://127.0.0.1:54428/_zz/p",
[junit4] 2> "shard":"shard2",
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"collection1",
[junit4] 2> "node_name":"127.0.0.1:54428__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "core_node_name":"core_node1",
[junit4] 2> "state":"active"} current state version: 4
[junit4] 2> 945358 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Update state numShards=2 message={
[junit4] 2> "numShards":"2",
[junit4] 2> "base_url":"http://127.0.0.1:54428/_zz/p",
[junit4] 2> "shard":"shard2",
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"collection1",
[junit4] 2> "node_name":"127.0.0.1:54428__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "core_node_name":"core_node1",
[junit4] 2> "state":"active"}
[junit4] 2> 945359 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ZkStateWriter.writePendingUpdates going to update_collection
/collections/collection1/state.json version: 3
[junit4] 2> 945704 T5571 oas.SolrTestCaseJ4.writeCoreProperties Writing
core.properties file to
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1
[junit4] 2> 945708 T5571 oasc.AbstractFullDistribZkTestBase.createJettys
create jetty 2 in directory
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001
[junit4] 2> 945709 T5571 oejs.Server.doStart jetty-9.2.10.v20150310
[junit4] 2> 945711 T5571 oejsh.ContextHandler.doStart Started
o.e.j.s.ServletContextHandler@7dfd19a8{/_zz/p,null,AVAILABLE}
[junit4] 2> 945713 T5571 oejs.AbstractConnector.doStart Started
ServerConnector@739fa536{HTTP/1.1}{127.0.0.1:54451}
[junit4] 2> 945713 T5571 oejs.Server.doStart Started @953217ms
[junit4] 2> 945713 T5571 oascse.JettySolrRunner$1.lifeCycleStarted Jetty
properties: {solrconfig=solrconfig.xml,
solr.data.dir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\tempDir-001/jetty2, hostPort=54450,
coreRootDirectory=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores, hostContext=/_zz/p}
[junit4] 2> 945714 T5571 oass.SolrDispatchFilter.init
SolrDispatchFilter.init()sun.misc.Launcher$AppClassLoader@4e857327
[junit4] 2> 945714 T5571 oasc.SolrResourceLoader.<init> new
SolrResourceLoader for directory:
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\'
[junit4] 2> 945739 T5571 oasc.SolrXmlConfig.fromFile Loading container
configuration from
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\solr.xml
[junit4] 2> 945762 T5571 oasc.CorePropertiesLocator.<init> Config-defined
core root directory:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores
[junit4] 2> 945762 T5571 oasc.CoreContainer.<init> New CoreContainer
665764419
[junit4] 2> 945762 T5571 oasc.CoreContainer.load Loading cores into
CoreContainer
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\]
[junit4] 2> 945762 T5571 oasc.CoreContainer.load loading shared library:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\lib
[junit4] 2> 945763 T5571 oasc.SolrResourceLoader.addToClassLoader WARN
Can't find (or read) directory to add to classloader: lib (resolved as:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\lib).
[junit4] 2> 945772 T5571 oashc.HttpShardHandlerFactory.init created with
socketTimeout : 90000,urlScheme : ,connTimeout : 15000,maxConnectionsPerHost :
20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize :
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy :
false,useRetries : false,
[junit4] 2> 945775 T5571 oasu.UpdateShardHandler.<init> Creating
UpdateShardHandler HTTP client with params:
socketTimeout=340000&connTimeout=45000&retry=true
[junit4] 2> 945776 T5571 oasl.LogWatcher.createWatcher SLF4J impl is
org.slf4j.impl.Log4jLoggerFactory
[junit4] 2> 945776 T5571 oasl.LogWatcher.newRegisteredLogWatcher
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
[junit4] 2> 945776 T5571 oasc.CoreContainer.load Node Name: 127.0.0.1
[junit4] 2> 945776 T5571 oasc.ZkContainer.initZooKeeper Zookeeper
client=127.0.0.1:54393/solr
[junit4] 2> 945777 T5571 oasc.ZkController.checkChrootPath zkHost includes
chroot
[junit4] 2> 946780 T5571 n:127.0.0.1:54450__zz%2Fp
oasc.ZkController.createEphemeralLiveNode Register node as live in
ZooKeeper:/live_nodes/127.0.0.1:54450__zz%2Fp
[junit4] 2> 946786 T5571 n:127.0.0.1:54450__zz%2Fp oasc.Overseer.close
Overseer (id=null) closing
[junit4] 2> 946787 T5571 n:127.0.0.1:54450__zz%2Fp
oasc.CoreContainer.initializeAuthenticationPlugin No authentication plugin used.
[junit4] 2> 946788 T5571 n:127.0.0.1:54450__zz%2Fp
oasc.CoreContainer.intializeAuthorizationPlugin Security conf doesn't exist.
Skipping setup for authorization module.
[junit4] 2> 946789 T5571 n:127.0.0.1:54450__zz%2Fp
oasc.CorePropertiesLocator.discover Looking for core definitions underneath
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores
[junit4] 2> 946790 T5571 n:127.0.0.1:54450__zz%2Fp
oasc.CoreDescriptor.<init> CORE DESCRIPTOR: {loadOnStartup=true,
transient=false, coreNodeName=, schema=schema.xml, name=collection1, shard=,
config=solrconfig.xml, collection=collection1,
instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1, dataDir=data\,
absoluteInstDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1\}
[junit4] 2> 946790 T5571 n:127.0.0.1:54450__zz%2Fp
oasc.CorePropertiesLocator.discoverUnder Found core collection1 in
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1\
[junit4] 2> 946791 T5571 n:127.0.0.1:54450__zz%2Fp
oasc.CorePropertiesLocator.discover Found 1 core definitions
[junit4] 2> 946794 T5655 n:127.0.0.1:54450__zz%2Fp c:collection1
x:collection1 oasc.ZkController.publish publishing core=collection1 state=down
collection=collection1
[junit4] 2> 946794 T5655 n:127.0.0.1:54450__zz%2Fp c:collection1
x:collection1 oasc.ZkController.publish numShards not found on descriptor -
reading it from system property
[junit4] 2> 946795 T5599 n:127.0.0.1:54400__zz%2Fp
oasc.DistributedQueue$LatchWatcher.process NodeChildrenChanged fired on path
/overseer/queue state SyncConnected
[junit4] 2> 946796 T5655 n:127.0.0.1:54450__zz%2Fp
oasc.ZkController.preRegister Registering watch for external collection
collection1
[junit4] 2> 946796 T5600 n:127.0.0.1:54400__zz%2Fp
oasc.Overseer$ClusterStateUpdater.run processMessage: queueSize: 1, message = {
[junit4] 2> "numShards":"2",
[junit4] 2> "base_url":"http://127.0.0.1:54450/_zz/p",
[junit4] 2> "shard":null,
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"collection1",
[junit4] 2> "node_name":"127.0.0.1:54450__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "state":"down"} current state version: 4
[junit4] 2> 946797 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Update state numShards=2 message={
[junit4] 2> "numShards":"2",
[junit4] 2> "base_url":"http://127.0.0.1:54450/_zz/p",
[junit4] 2> "shard":null,
[junit4] 2> "core":"collection1",
[junit4] 2> "collection":"collection1",
[junit4] 2> "node_name":"127.0.0.1:54450__zz%2Fp",
[junit4] 2> "operation":"state",
[junit4] 2> "roles":null,
[junit4] 2> "state":"down"}
[junit4] 2> 946797 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Collection already exists with numShards=2
[junit4] 2> 946797 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ReplicaMutator.updateState Assigning new node to shard shard=shard1
[junit4] 2> 946798 T5600 n:127.0.0.1:54400__zz%2Fp
oasco.ZkStateWriter.writePendingUpdates going to update_collection
/collections/collection1/state.json version: 4
[junit4] 2> 946798 T5655 n:127.0.0.1:54450__zz%2Fp
oasc.ZkController.waitForCoreNodeName look for our core node name
[junit4] 2> 947778 T5655 n:127.0.0.1:54450__zz%2Fp
oasc.ZkController.waitForShardId waiting to find shard id in clusterstate for
collection1
[junit4] 2> 947778 T5655 n:127.0.0.1:54450__zz%2Fp
oasc.ZkController.createCollectionZkNode Check for collection zkNode:collection1
[junit4] 2> 947779 T5655 n:127.0.0.1:54450__zz%2Fp
oasc.ZkController.createCollectionZkNode Collection zkNode exists
[junit4] 2> 947780 T5655 n:127.0.0.1:54450__zz%2Fp
oasc.SolrResourceLoader.<init> new SolrResourceLoader for directory:
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1\'
[junit4] 2> 947796 T5655 n:127.0.0.1:54450__zz%2Fp oasc.Config.<init>
loaded config solrconfig.xml with version 0
[junit4] 2> 947807 T5655 n:127.0.0.1:54450__zz%2Fp
oasc.SolrConfig.refreshRequestParams current version of requestparams : -1
[junit4] 2> 947821 T5655 n:127.0.0.1:54450__zz%2Fp oasc.SolrConfig.<init>
Using Lucene MatchVersion: 5.2.0
[junit4] 2> 947857 T5655 n:127.0.0.1:54450__zz%2Fp oasc.SolrConfig.<init>
Loaded SolrConfig: solrconfig.xml
[junit4] 2> 947857 T5655 n:127.0.0.1:54450__zz%2Fp
oass.IndexSchema.readSchema Reading Solr Schema from /configs/conf1/schema.xml
[junit4] 2> 947865 T5655 n:127.0.0.1:54450__zz%2Fp
oass.IndexSchema.readSchema [collection1] Schema name=test
[junit4] 2> 948188 T5655 n:127.0.0.1:54450__zz%2Fp
oass.OpenExchangeRatesOrgProvider.init Initialized with
rates=open-exchange-rates.json, refreshInterval=1440.
[junit4] 2> 948198 T5655 n:127.0.0.1:54450__zz%2Fp
oass.IndexSchema.readSchema default search field in schema is text
[junit4] 2> 948200 T5655 n:127.0.0.1:54450__zz%2Fp
oass.IndexSchema.readSchema unique key field: id
[junit4] 2> 948213 T5655 n:127.0.0.1:54450__zz%2Fp
oass.FileExchangeRateProvider.reload Reloading exchange rates from file
currency.xml
[junit4] 2> 948215 T5655 n:127.0.0.1:54450__zz%2Fp
oass.FileExchangeRateProvider.reload Reloading exchange rates from file
currency.xml
[junit4] 2> 948223 T5655 n:127.0.0.1:54450__zz%2Fp
oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from
open-exchange-rates.json
[junit4] 2> 948224 T5655 n:127.0.0.1:54450__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key
IMPORTANT NOTE
[junit4] 2> 948224 T5655 n:127.0.0.1:54450__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key,
got STRING
[junit4] 2> 948224 T5655 n:127.0.0.1:54450__zz%2Fp
oass.OpenExchangeRatesOrgProvider.reload Reloading exchange rates from
open-exchange-rates.json
[junit4] 2> 948226 T5655 n:127.0.0.1:54450__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Unknown key
IMPORTANT NOTE
[junit4] 2> 948226 T5655 n:127.0.0.1:54450__zz%2Fp
oass.OpenExchangeRatesOrgProvider$OpenExchangeRates.<init> WARN Expected key,
got STRING
[junit4] 2> 948226 T5655 n:127.0.0.1:54450__zz%2Fp
oasc.CoreContainer.create Creating SolrCore 'collection1' using configuration
from collection collection1
[junit4] 2> 948226 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrCore.initDirectoryFactory org.apache.solr.core.MockDirectoryFactory
[junit4] 2> 948226 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrCore.<init> [[collection1] ] Opening new SolrCore at
[C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1\], dataDir=[null]
[junit4] 2> 948227 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.JmxMonitoredMap.<init> JMX monitoring is enabled. Adding Solr mbeans to
JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@2db36e6f
[junit4] 2> 948227 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.CachingDirectoryFactory.get return new directory for
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1\data\
[junit4] 2> 948227 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrCore.getNewIndexDir New index directory detected: old=null
new=C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1\data\index/
[junit4] 2> 948227 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrCore.initIndex WARN [collection1] Solr index directory
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1\data\index' doesn't exist.
Creating new index...
[junit4] 2> 948228 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.CachingDirectoryFactory.get return new directory for
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-2-001\cores\collection1\data\index
[junit4] 2> 948228 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy:
minMergeSize=1000, mergeFactor=14, maxMergeSize=9223372036854775807,
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true,
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12,
noCFSRatio=0.8856516091129182]
[junit4] 2> 948228 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.onCommit SolrDeletionPolicy.onCommit: commits: num=1
[junit4] 2>
commit{dir=MockDirectoryWrapper(RAMDirectory@6bc61dc8
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@1bcd6cbf),segFN=segments_1,generation=1}
[junit4] 2> 948228 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
[junit4] 2> 948233 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"nodistrib"
[junit4] 2> 948233 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"dedupe"
[junit4] 2> 948233 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init inserting
DistributedUpdateProcessorFactory into updateRequestProcessorChain "dedupe"
[junit4] 2> 948234 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"stored_sig"
[junit4] 2> 948234 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init inserting
DistributedUpdateProcessorFactory into updateRequestProcessorChain "stored_sig"
[junit4] 2> 948234 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"distrib-dup-test-chain-explicit"
[junit4] 2> 948234 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init creating updateRequestProcessorChain
"distrib-dup-test-chain-implicit"
[junit4] 2> 948235 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasup.UpdateRequestProcessorChain.init inserting
DistributedUpdateProcessorFactory into updateRequestProcessorChain
"distrib-dup-test-chain-implicit"
[junit4] 2> 948235 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrCore.loadUpdateProcessorChains no updateRequestProcessorChain defined
as default, creating implicit default
[junit4] 2> 948242 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 948244 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 948246 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 948248 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oashl.XMLLoader.init xsltCacheLifetimeSeconds=60
[junit4] 2> 948253 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.RequestHandlers.initHandlersFromConfig Registered paths:
/replication,standard,/admin/segments,/admin/file,/get,/admin/logging,/schema,/update/json,/admin/threads,/admin/properties,/admin/ping,/admin/system,/update,/admin/mbeans,/admin/plugins,/update/json/docs,/admin/luke,/update/csv,/config
[junit4] 2> 948255 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrCore.initStatsCache Using default statsCache cache:
org.apache.solr.search.stats.LocalStatsCache
[junit4] 2> 948256 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.UpdateHandler.<init> Using UpdateLog implementation:
org.apache.solr.update.UpdateLog
[junit4] 2> 948256 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.UpdateLog.init Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=256
[junit4] 2> 948258 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.CommitTracker.<init> Hard AutoCommit: disabled
[junit4] 2> 948258 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.CommitTracker.<init> Soft AutoCommit: disabled
[junit4] 2> 948259 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.RandomMergePolicy.<init> RandomMergePolicy wrapping class
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy:
maxMergeAtOnce=40, maxMergeAtOnceExplicit=19, maxMergedSegmentMB=96.5947265625,
floorSegmentMB=1.111328125, forceMergeDeletesPctAllowed=1.598420489017689,
segmentsPerTier=14.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0
[junit4] 2> 948259 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.onInit SolrDeletionPolicy.onInit: commits: num=1
[junit4] 2>
commit{dir=MockDirectoryWrapper(RAMDirectory@6bc61dc8
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@1bcd6cbf),segFN=segments_1,generation=1}
[junit4] 2> 948259 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasc.SolrDeletionPolicy.updateCommits newest commit generation = 1
[junit4] 2> 948259 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oass.SolrIndexSearcher.<init> Opening Searcher@27a3fd03[collection1] main
[junit4] 2> 948259 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.UpdateLog.onFirstSearcher On first searcher opened, looking up max value
of version field
[junit4] 2> 948259 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.VersionInfo.getMaxVersionFromIndex Refreshing highest value of _version_
for 256 version buckets from index
[junit4] 2> 948260 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.VersionInfo.getMaxVersionFromIndex WARN No terms found for _version_,
cannot seed version bucket highest value from index
[junit4] 2> 948260 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.UpdateLog.seedBucketsWithHighestVersion WARN Could not find max version in
index or recent updates, using new clock 1501826790202540032
[junit4] 2> 948260 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasu.UpdateLog.seedBucketsWithHighestVersion Took 1 ms to seed version buckets
with highest version 1501826790202540032
[junit4] 2> 948261 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasr.ManagedResourceStorage.newStorageIO Setting up ZooKeeper-based storage for
the RestManager with znodeBase: /configs/conf1
[junit4] 2> 948262 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1
oasr.ManagedResourceStorage$ZooKeeperStorageIO.configure Configured
ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 948262 T5655 n:127.0.0.1:54450__zz%2Fp x:collection1 oasr
[...truncated too long message...]
.java:436)
[junit4] 2> at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
[junit4] 2> at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
[junit4] 2> at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
[junit4] 2> at
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:105)
[junit4] 2> at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
[junit4] 2> at
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
[junit4] 2> at
org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:300)
[junit4] 2> at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
[junit4] 2> at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
[junit4] 2> at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
[junit4] 2> at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
[junit4] 2> at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
[junit4] 2> at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
[junit4] 2> at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
[junit4] 2> at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
[junit4] 2> at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
[junit4] 2> at
org.eclipse.jetty.server.Server.handle(Server.java:497)
[junit4] 2> at
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
[junit4] 2> at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
[junit4] 2> at
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
[junit4] 2> at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
[junit4] 2> at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
[junit4] 2> at java.lang.Thread.run(Thread.java:745)
[junit4] 2>
[junit4] 2> NOTE: test params are: codec=Lucene50, sim=DefaultSimilarity,
locale=sr_BA_#Latn, timezone=Canada/Mountain
[junit4] 2> NOTE: Windows 7 6.1 amd64/Oracle Corporation 1.7.0_80
(64-bit)/cpus=3,threads=1,free=159373944,total=424148992
[junit4] 2> NOTE: All tests run in this JVM: [TestOrdValues,
SimplePostToolTest, DateMathParserTest, TestSchemaSimilarityResource,
TestFuzzyAnalyzedSuggestions, DistributedQueryComponentOptimizationTest,
TestNonDefinedSimilarityFactory, AtomicUpdatesTest,
HdfsWriteToMultipleCollectionsTest, TestDynamicFieldResource,
TestCloudPivotFacet, CollectionsAPIDistributedZkTest, TestObjectReleaseTracker,
TestConfigSets, TestTolerantSearch, TestCollapseQParserPlugin,
TermVectorComponentDistributedTest, TestRemoveLastDynamicCopyField,
DeleteShardTest, TestBulkSchemaAPI, TestExactStatsCache,
HdfsCollectionsAPIDistributedZkTest, TestSuggestSpellingConverter, TestDocSet,
TestTrie, TestMergePolicyConfig, DistribJoinFromCollectionTest,
BinaryUpdateRequestHandlerTest, TestUtils, BadIndexSchemaTest, BJQParserTest,
IndexBasedSpellCheckerTest, TestSolrXml, TestStandardQParsers,
TestCoreDiscovery, TestSearchPerf, UpdateRequestProcessorFactoryTest,
CustomCollectionTest, SearchHandlerTest, TestCloudInspectUtil,
TestManagedSchemaDynamicFieldResource, SpatialHeatmapFacetsTest,
SharedFSAutoReplicaFailoverTest, SolrCloudExampleTest, UpdateParamsTest,
SpellingQueryConverterTest, TestFastWriter, TestSchemaNameResource,
OverseerStatusTest, SpatialFilterTest, NotRequiredUniqueKeyTest,
HighlighterConfigTest, ReturnFieldsTest, BlockDirectoryTest,
DistributedQueryComponentCustomSortTest, QueryParsingTest,
TestRandomMergePolicy, TestFieldSortValues, BasicFunctionalityTest,
CopyFieldTest, TestJsonFacets, SpellCheckComponentTest,
RecoveryAfterSoftCommitTest, MultiThreadedOCPTest, TestCloudSchemaless,
TestSchemaManager, FileBasedSpellCheckerTest, QueryEqualityTest, TestRTGBase,
TestRandomFaceting, TestRangeQuery, LoggingHandlerTest, DateFieldTest,
AnalyticsQueryTest, DistributedExpandComponentTest,
TestSolrConfigHandlerConcurrent, BasicZkTest, TestExactSharedStatsCache,
AliasIntegrationTest, SolrRequestParserTest, TermsComponentTest,
HttpPartitionTest]
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=HttpPartitionTest -Dtests.seed=AFC1F4C965F8BAFE -Dtests.slow=true -Dtests.locale=sr_BA_#Latn -Dtests.timezone=Canada/Mountain -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
[junit4] ERROR 0.00s J1 | HttpPartitionTest (suite) <<<
[junit4] > Throwable #1: java.lang.AssertionError: Some resources were
not closed, shutdown, or released.
[junit4] > at
__randomizedtesting.SeedInfo.seed([AFC1F4C965F8BAFE]:0)
[junit4] > at
org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:234)
[junit4] > at java.lang.Thread.run(Thread.java:745)Throwable #2:
java.io.IOException: Could not remove the following files (in the order of
attempts):
[junit4] >
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog\tlog.0000000000000000000:
java.nio.file.FileSystemException:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog\tlog.0000000000000000000:
The process cannot access the file because it is being used by another process.
[junit4] >
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog:
java.nio.file.DirectoryNotEmptyException:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data\tlog
[junit4] >
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data:
java.nio.file.DirectoryNotEmptyException:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2\data
[junit4] >
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2:
java.nio.file.DirectoryNotEmptyException:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores\c8n_1x2_leader_session_loss_shard1_replica2
[junit4] >
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores:
java.nio.file.DirectoryNotEmptyException:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001\cores
[junit4] >
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001: java.nio.file.DirectoryNotEmptyException:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001\shard-3-001
[junit4] >
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001: java.nio.file.DirectoryNotEmptyException:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.HttpPartitionTest
AFC1F4C965F8BAFE-001
[junit4] > at org.apache.lucene.util.IOUtils.rm(IOUtils.java:294)
[junit4] > at java.lang.Thread.run(Thread.java:745)
[junit4] Completed [179/494] on J1 in 146.50s, 1 test, 1 failure, 1 error
<<< FAILURES!
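The suite-level cleanup failure above is the classic Windows file-locking behavior: a tlog file still held open by some component cannot be deleted, so IOUtils.rm fails on the file and then on every parent directory with DirectoryNotEmptyException. A minimal, self-contained sketch of that behavior (illustrative only, not taken from the test) is below.

    // Hypothetical demo of the Windows behavior behind the failure above: deleting a
    // file while another handle still has it open fails with FileSystemException
    // ("The process cannot access the file because it is being used by another
    // process"), which then leaves the parent directories non-empty.
    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class OpenHandleDeleteDemo {
        public static void main(String[] args) throws IOException {
            Path tlog = Files.createTempFile("tlog-demo", ".bin"); // stand-in for a tlog file
            try (FileChannel ch = FileChannel.open(tlog, StandardOpenOption.WRITE)) {
                try {
                    Files.delete(tlog); // on Windows this throws java.nio.file.FileSystemException
                } catch (IOException e) {
                    System.out.println("delete while handle open: " + e);
                }
            }
            Files.deleteIfExists(tlog); // succeeds once the handle is closed
        }
    }

On POSIX file systems the in-loop delete succeeds even while the handle is open, which is why this cleanup assertion tends to trip only on the Windows Jenkins slaves.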
[...truncated 1009 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:536: The
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:484: The
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:61: The
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\extra-targets.xml:39:
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build.xml:229: The
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\common-build.xml:511:
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\lucene\common-build.xml:1433:
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\lucene\common-build.xml:991:
There were test failures: 494 suites, 1970 tests, 2 suite-level errors, 66
ignored (34 assumptions)
Total time: 64 minutes 56 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any