[jira] [Commented] (ZOOKEEPER-1948) Enable JMX remote monitoring - Updated patch for review comments
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14117936#comment-14117936 ] Rakesh R commented on ZOOKEEPER-1948: - +1 lgtm. Enable JMX remote monitoring - Updated patch for review comments Key: ZOOKEEPER-1948 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1948 Project: ZooKeeper Issue Type: Improvement Components: server Affects Versions: 3.4.6 Environment: All Reporter: Biju Nair Assignee: Biju Nair Fix For: 3.4.7, 3.5.1 Attachments: ZOOKEEPER-1948.patch, ZOOKEEPER-1948.patch, ZOOKEEPER-1948.patch.v2, zookeeper-1948.patch The ZooKeeper server start-up script includes the option to enable JMX monitoring, but only locally. Can we update the script so that remote monitoring can also be enabled? This would help with data collection and monitoring through a centralized monitoring tool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
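Remote JMX is usually enabled by adding the standard com.sun.management.jmxremote system properties to the server's JVM flags. As a rough Java sketch of what those flags do under the hood (the port 9999 and the no-auth setup are arbitrary examples for illustration, not what the patch configures):

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteJmxSketch {
    // Expose the platform MBeanServer over RMI on the given port, so a
    // remote tool (e.g. jconsole) can attach and read the MBeans.
    static JMXConnectorServer startConnector(int port) throws Exception {
        LocateRegistry.createRegistry(port); // RMI registry holding the stub
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:" + port + "/jmxrmi");
        JMXConnectorServer cs =
                JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        cs.start();
        return cs;
    }

    public static void main(String[] args) throws Exception {
        JMXConnectorServer cs = startConnector(9999); // example port only
        System.out.println("JMX connector active: " + cs.isActive());
        cs.stop();
    }
}
```

In production you would also decide on SSL and authentication settings; the sketch above deliberately leaves both out.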
[jira] [Created] (ZOOKEEPER-2026) Startup order in ServerCnxnFactory-ies is wrong
Stevo Slavic created ZOOKEEPER-2026: --- Summary: Startup order in ServerCnxnFactory-ies is wrong Key: ZOOKEEPER-2026 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2026 Project: ZooKeeper Issue Type: Bug Components: java client Affects Versions: 3.4.6 Reporter: Stevo Slavic Priority: Minor {{NIOServerCnxnFactory}} and {{NettyServerCnxnFactory}} {{startup}} method implementations bind the {{ZooKeeperServer}} too late, so {{ZooKeeperServer}} can fail to register the appropriate JMX MBean during its startup. See [this|http://mail-archives.apache.org/mod_mbox/zookeeper-user/201409.mbox/%3CCAAUywg9-ad3oWfqRWahB9PyBEbg6%2Bd%3DDyj5PAUU7A%3Dm9wRncaw%40mail.gmail.com%3E] post on the ZK user mailing list for more details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
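The fix amounts to reordering two calls in the factories' startup sequence. A toy mock of the ordering issue (these are NOT the real ZooKeeper classes; all names here are made up for illustration):

```java
// Illustrative mock only: shows why a connection factory must hand the
// server its reference before invoking the server's startup, which is
// where the JMX MBean gets registered.
public class StartupOrderSketch {

    static class Server {
        Factory factory; // set by the factory; null until then

        String startup() {
            // MBean registration needs details owned by the factory
            return factory == null ? "mbean-registration-failed"
                                   : "mbean-registered";
        }
    }

    static class Factory {
        // buggy order: the server starts before the factory binds itself
        String startupBuggy(Server s) {
            String result = s.startup();
            s.factory = this; // too late
            return result;
        }

        // fixed order: bind first, then start the server
        String startupFixed(Server s) {
            s.factory = this;
            return s.startup();
        }
    }

    public static void main(String[] args) {
        Factory f = new Factory();
        System.out.println("buggy: " + f.startupBuggy(new Server()));
        System.out.println("fixed: " + f.startupFixed(new Server()));
    }
}
```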
[jira] [Updated] (ZOOKEEPER-2026) Startup order in ServerCnxnFactory-ies is wrong
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stevo Slavic updated ZOOKEEPER-2026: Attachment: ZOOKEEPER-2026.patch Attached [^ZOOKEEPER-2026.patch] Startup order in ServerCnxnFactory-ies is wrong --- Key: ZOOKEEPER-2026 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2026 Project: ZooKeeper Issue Type: Bug Components: java client Affects Versions: 3.4.6 Reporter: Stevo Slavic Priority: Minor Attachments: ZOOKEEPER-2026.patch {{NIOServerCnxnFactory}} and {{NettyServerCnxnFactory}} {{startup}} method implementations bind the {{ZooKeeperServer}} too late, so {{ZooKeeperServer}} can fail to register the appropriate JMX MBean during its startup. See [this|http://mail-archives.apache.org/mod_mbox/zookeeper-user/201409.mbox/%3CCAAUywg9-ad3oWfqRWahB9PyBEbg6%2Bd%3DDyj5PAUU7A%3Dm9wRncaw%40mail.gmail.com%3E] post on the ZK user mailing list for more details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 25217: Improved system test
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/25217/ --- (Updated Sept. 2, 2014, 7:28 a.m.) Review request for zookeeper. Repository: zookeeper Description --- Adding the ability to perform a system test of mixed workloads using read-only/mixed/write-only clients. In addition, adding a few basic latency statistics. See https://issues.apache.org/jira/browse/ZOOKEEPER-2023 Diffs (updated) - ./src/java/systest/README.txt 1619360 ./src/java/systest/org/apache/zookeeper/test/system/GenerateLoad.java 1619360 Diff: https://reviews.apache.org/r/25217/diff/ Testing --- Thanks, Kfir Lev-Ari
[jira] [Updated] (ZOOKEEPER-2023) Improved system test
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kfir Lev-Ari updated ZOOKEEPER-2023: Attachment: (was: ZOOKEEPER-2023.patch) Improved system test Key: ZOOKEEPER-2023 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2023 Project: ZooKeeper Issue Type: Test Components: contrib-fatjar Affects Versions: 3.5.0 Reporter: Kfir Lev-Ari Assignee: Kfir Lev-Ari Priority: Minor Attachments: ZOOKEEPER-2023.patch, ZOOKEEPER-2023.patch Adding the ability to perform a system test of mixed workloads using read-only/mixed/write-only clients. In addition, adding a few basic latency statistics. https://reviews.apache.org/r/25217/ Just in case it'll help someone, here is an example of how to run the generate-load system test:
1. Check out zookeeper-trunk.
2. Go to zookeeper-trunk, run ant jar compile-test.
3. Go to zookeeper-trunk\src\contrib\fatjar, run ant jar.
4. Copy zookeeper-dev-fatjar.jar from zookeeper-trunk\build\contrib\fatjar to each of the machines you wish to use.
5. On each server, assuming that you've created a valid ZK config file (e.g., zk.cfg) and a dataDir, run:
5.1 java -jar zookeeper-dev-fatjar.jar server ./zk.cfg
5.2 java -jar zookeeper-dev-fatjar.jar ic <name of this server>:<its client port> <name of this server>:<its client port> /sysTest
6. And finally, in order to run the test (from some machine), execute the command: java -jar zookeeper-dev-fatjar.jar generateLoad <name of one of the servers>:<its client port> /sysTest <number of servers> <number of read-only clients> <number of mixed-workload clients> <number of write-only clients>
Note that /sysTest is the same name that we used in 5.2. You'll see a "Preferred List is empty" message, and after a few seconds you should get notifications of "Accepted connection from Socket[". Afterwards, just set the write percentage of the mixed-workload clients by entering a percentage number, and the test will start.
Some explanation regarding the new output (which is printed every 6 seconds, and is reset every time you enter a new percentage):
Interval: <interval number> <time>
Test info: <number of RO clients>xRO <number of mixed-workload clients>x<their write percentage>%W <number of write-only clients>xWO, percentiles [0.5, 0.9, 0.95, 0.99]
Throughput: <current interval throughput> | <minimum throughput until now> <average throughput until now> <maximum throughput until now>
Read latency: interval [interval's read latency values according to the percentiles], total [read latency values until now, according to the percentiles]
Write latency: interval [interval's write latency values according to the percentiles], total [write latency values until now, according to the percentiles]
Note that the throughput is in requests per second, and latency is in ms. In addition, if you perform a read-only / write-only test, you won't see the printout of write / read latency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
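As a rough illustration of the latency percentiles the new output reports, here is a hypothetical sketch (this is not the patch's actual code, and the nearest-rank percentile method is an assumption about how such values are typically computed):

```java
import java.util.Arrays;

public class PercentileSketch {
    // Nearest-rank percentile over a sorted copy of the samples.
    // p is in (0, 1], e.g. 0.95 for the 95th percentile.
    static long percentile(long[] latenciesMs, double p) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p * sorted.length) - 1;
        return sorted[Math.max(0, rank)];
    }

    public static void main(String[] args) {
        // made-up latency samples in ms
        long[] ms = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        for (double p : new double[] {0.5, 0.9, 0.95, 0.99}) {
            System.out.println(p + " -> " + percentile(ms, p) + " ms");
        }
    }
}
```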
[jira] [Commented] (ZOOKEEPER-823) update ZooKeeper java client to optionally use Netty for connections
[ https://issues.apache.org/jira/browse/ZOOKEEPER-823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118003#comment-14118003 ] Stevo Slavic commented on ZOOKEEPER-823: The "optionally" part of the issue summary is related to ZOOKEEPER-1681. update ZooKeeper java client to optionally use Netty for connections Key: ZOOKEEPER-823 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-823 Project: ZooKeeper Issue Type: New Feature Components: java client Reporter: Patrick Hunt Assignee: Patrick Hunt Fix For: 3.5.1 Attachments: NettyNettySuiteTest.rtf, TEST-org.apache.zookeeper.test.NettyNettySuiteTest.txt.gz, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, ZOOKEEPER-823.patch, testDisconnectedAddAuth_FAILURE, testWatchAutoResetWithPending_FAILURE This jira will port the client side connection code to use netty rather than direct nio. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2026) Startup order in ServerCnxnFactory-ies is wrong
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118007#comment-14118007 ] Rakesh R commented on ZOOKEEPER-2026: - More Details: [zk user mailing thread - registering zookeeperserver jmx mbean | http://qnalist.com/questions/5107991/servercnxnfactory-startup-order-and-registering-zookeeperserver-jmx-mbean] Thanks [~sslavic]. Could you add a test case for this? Startup order in ServerCnxnFactory-ies is wrong --- Key: ZOOKEEPER-2026 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2026 Project: ZooKeeper Issue Type: Bug Components: java client Affects Versions: 3.4.6 Reporter: Stevo Slavic Priority: Minor Attachments: ZOOKEEPER-2026.patch {{NIOServerCnxnFactory}} and {{NettyServerCnxnFactory}} {{startup}} method implementations bind the {{ZooKeeperServer}} too late, so {{ZooKeeperServer}} can fail to register the appropriate JMX MBean during its startup. See [this|http://mail-archives.apache.org/mod_mbox/zookeeper-user/201409.mbox/%3CCAAUywg9-ad3oWfqRWahB9PyBEbg6%2Bd%3DDyj5PAUU7A%3Dm9wRncaw%40mail.gmail.com%3E] post on the ZK user mailing list for more details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2026) Startup order in ServerCnxnFactory-ies is wrong
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated ZOOKEEPER-2026: Fix Version/s: 3.5.1 3.4.7 Startup order in ServerCnxnFactory-ies is wrong --- Key: ZOOKEEPER-2026 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2026 Project: ZooKeeper Issue Type: Bug Components: java client Affects Versions: 3.4.6 Reporter: Stevo Slavic Priority: Minor Fix For: 3.4.7, 3.5.1 Attachments: ZOOKEEPER-2026.patch {{NIOServerCnxnFactory}} and {{NettyServerCnxnFactory}} {{startup}} method implementations bind the {{ZooKeeperServer}} too late, so {{ZooKeeperServer}} can fail to register the appropriate JMX MBean during its startup. See [this|http://mail-archives.apache.org/mod_mbox/zookeeper-user/201409.mbox/%3CCAAUywg9-ad3oWfqRWahB9PyBEbg6%2Bd%3DDyj5PAUU7A%3Dm9wRncaw%40mail.gmail.com%3E] post on the ZK user mailing list for more details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Failed: ZOOKEEPER-2023 PreCommit Build #2307
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2023 Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2307/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 320850 lines...] [exec] [exec] +1 @author. The patch does not contain any @author tags. [exec] [exec] +1 tests included. The patch appears to include 3 new or modified tests. [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. [exec] [exec] -1 core tests. The patch failed core unit tests. [exec] [exec] +1 contrib tests. The patch passed contrib unit tests. [exec] [exec] Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2307//testReport/ [exec] Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2307//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html [exec] Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2307//console [exec] [exec] This message is automatically generated. [exec] [exec] [exec] == [exec] == [exec] Adding comment to Jira. [exec] == [exec] == [exec] [exec] [exec] Comment added. [exec] ac68e1e8cc1bae740afa2bdacbcf0618398f1dd7 logged out [exec] [exec] [exec] == [exec] == [exec] Finished build. 
[exec] == [exec] == [exec] [exec] BUILD FAILED /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build.xml:1713: exec returned: 1 Total time: 38 minutes 33 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Sending artifact delta relative to PreCommit-ZOOKEEPER-Build #2179 Archived 7 artifacts Archive block size is 32768 Received 0 blocks and 547197 bytes Compression is 0.0% Took 2.9 sec Recording test results Description set: ZOOKEEPER-2023 Email was triggered for: Failure Sending email for trigger: Failure ### ## FAILED TESTS (if any) ## 1 tests failed. FAILED: org.apache.zookeeper.test.NioNettySuiteHammerTest.testHammer Error Message: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit. Stack Trace: junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.
[jira] [Commented] (ZOOKEEPER-2023) Improved system test
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118013#comment-14118013 ] Hadoop QA commented on ZOOKEEPER-2023: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12665864/ZOOKEEPER-2023.patch against trunk revision 1621313. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 3 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed core unit tests. +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2307//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2307//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2307//console This message is automatically generated. Improved system test Key: ZOOKEEPER-2023 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2023 Project: ZooKeeper Issue Type: Test Components: contrib-fatjar Affects Versions: 3.5.0 Reporter: Kfir Lev-Ari Assignee: Kfir Lev-Ari Priority: Minor Attachments: ZOOKEEPER-2023.patch Adding the ability to perform a system test of mixed workloads using read-only/mixed/write-only clients. In addition, adding a few basic latency statistics. https://reviews.apache.org/r/25217/ Just in case it'll help someone, here is an example of how to run the generate-load system test:
1. Check out zookeeper-trunk.
2. Go to zookeeper-trunk, run ant jar compile-test.
3. Go to zookeeper-trunk\src\contrib\fatjar, run ant jar.
4. Copy zookeeper-dev-fatjar.jar from zookeeper-trunk\build\contrib\fatjar to each of the machines you wish to use.
5. On each server, assuming that you've created a valid ZK config file (e.g., zk.cfg) and a dataDir, run:
5.1 java -jar zookeeper-dev-fatjar.jar server ./zk.cfg
5.2 java -jar zookeeper-dev-fatjar.jar ic <name of this server>:<its client port> <name of this server>:<its client port> /sysTest
6. And finally, in order to run the test (from some machine), execute the command: java -jar zookeeper-dev-fatjar.jar generateLoad <name of one of the servers>:<its client port> /sysTest <number of servers> <number of read-only clients> <number of mixed-workload clients> <number of write-only clients>
Note that /sysTest is the same name that we used in 5.2. You'll see a "Preferred List is empty" message, and after a few seconds you should get notifications of "Accepted connection from Socket[". Afterwards, just set the write percentage of the mixed-workload clients by entering a percentage number, and the test will start. Some explanation regarding the new output (which is printed every 6 seconds, and is reset every time you enter a new percentage):
Interval: <interval number> <time>
Test info: <number of RO clients>xRO <number of mixed-workload clients>x<their write percentage>%W <number of write-only clients>xWO, percentiles [0.5, 0.9, 0.95, 0.99]
Throughput: <current interval throughput> | <minimum throughput until now> <average throughput until now> <maximum throughput until now>
Read latency: interval [interval's read latency values according to the percentiles], total [read latency values until now, according to the percentiles]
Write latency: interval [interval's write latency values according to the percentiles], total [write latency values until now, according to the percentiles]
Note that the throughput is in requests per second, and latency is in ms. In addition, if you perform a read-only / write-only test, you won't see the printout of write / read latency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Failed: ZOOKEEPER-2026 PreCommit Build #2308
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2026 Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2308/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 308723 lines...] [exec] [exec] -1 tests included. The patch doesn't appear to include any new or modified tests. [exec] Please justify why no new tests are needed for this patch. [exec] Also please list what manual steps were performed to verify this patch. [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. [exec] [exec] -1 core tests. The patch failed core unit tests. [exec] [exec] +1 contrib tests. The patch passed contrib unit tests. [exec] [exec] Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2308//testReport/ [exec] Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2308//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html [exec] Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2308//console [exec] [exec] This message is automatically generated. [exec] [exec] [exec] == [exec] == [exec] Adding comment to Jira. [exec] == [exec] == [exec] [exec] [exec] Comment added. [exec] 5e1dab66efba3bdd0c5e7a3dddf3ad6fa8a1bc9d logged out [exec] [exec] [exec] == [exec] == [exec] Finished build. 
[exec] == [exec] == [exec] [exec] BUILD FAILED /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build.xml:1713: exec returned: 2 Total time: 39 minutes 36 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Sending artifact delta relative to PreCommit-ZOOKEEPER-Build #2179 Archived 7 artifacts Archive block size is 32768 Received 0 blocks and 547195 bytes Compression is 0.0% Took 1.9 sec Recording test results Description set: ZOOKEEPER-2026 Email was triggered for: Failure Sending email for trigger: Failure ### ## FAILED TESTS (if any) ## 4 tests failed. REGRESSION: org.apache.zookeeper.server.quorum.ReconfigRecoveryTest.testCurrentServersAreObserversInNextConfig Error Message: waiting for server 2 being up Stack Trace: junit.framework.AssertionFailedError: waiting for server 2 being up at org.apache.zookeeper.server.quorum.ReconfigRecoveryTest.testCurrentServersAreObserversInNextConfig(ReconfigRecoveryTest.java:217) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) REGRESSION: org.apache.zookeeper.test.AuthTest.testBadAuthNotifiesWatch Error Message: Address already in use Stack Trace: java.net.BindException: Address already in use at sun.nio.ch.Net.bind(Native Method) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52) at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:684) at org.apache.zookeeper.server.ServerCnxnFactory.createFactory(ServerCnxnFactory.java:133) at org.apache.zookeeper.server.ServerCnxnFactory.createFactory(ServerCnxnFactory.java:126) at org.apache.zookeeper.test.ClientBase.createNewServerInstance(ClientBase.java:366) at org.apache.zookeeper.test.ClientBase.startServer(ClientBase.java:444) at org.apache.zookeeper.test.ClientBase.setUp(ClientBase.java:437) 
REGRESSION: org.apache.zookeeper.test.BufferSizeTest.testCreatesReqs Error Message: Address already in use Stack
[jira] [Commented] (ZOOKEEPER-2026) Startup order in ServerCnxnFactory-ies is wrong
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118014#comment-14118014 ] Hadoop QA commented on ZOOKEEPER-2026: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12665863/ZOOKEEPER-2026.patch against trunk revision 1621313. +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed core unit tests. +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2308//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2308//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2308//console This message is automatically generated. Startup order in ServerCnxnFactory-ies is wrong --- Key: ZOOKEEPER-2026 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2026 Project: ZooKeeper Issue Type: Bug Components: java client Affects Versions: 3.4.6 Reporter: Stevo Slavic Priority: Minor Fix For: 3.4.7, 3.5.1 Attachments: ZOOKEEPER-2026.patch {{NIOServerCnxnFactory}} and {{NettyServerCnxnFactory}} {{startup}} method implementations bind the {{ZooKeeperServer}} too late, so {{ZooKeeperServer}} can fail to register the appropriate JMX MBean during its startup.
See [this|http://mail-archives.apache.org/mod_mbox/zookeeper-user/201409.mbox/%3CCAAUywg9-ad3oWfqRWahB9PyBEbg6%2Bd%3DDyj5PAUU7A%3Dm9wRncaw%40mail.gmail.com%3E] post on ZK user mailing list for more details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-1681) ZooKeeper 3.4.x can optionally use netty for nio but the pom does not declare the dep as optional
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stevo Slavic updated ZOOKEEPER-1681: Attachment: ZOOKEEPER-1681.patch Attached [^ZOOKEEPER-1681.patch]. This patch makes the netty dependency optional and also upgrades the dependency to the latest release. The patch does not include new tests. Existing tests pass on my machine. ZooKeeper 3.4.x can optionally use netty for nio but the pom does not declare the dep as optional - Key: ZOOKEEPER-1681 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1681 Project: ZooKeeper Issue Type: Improvement Affects Versions: 3.4.0, 3.4.1, 3.4.2, 3.4.4, 3.4.5 Reporter: John Sirois Fix For: 3.5.1 Attachments: ZOOKEEPER-1681.patch For example in [3.4.5|http://search.maven.org/remotecontent?filepath=org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.pom] we see: {code}
$ curl -sS http://search.maven.org/remotecontent?filepath=org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.pom | grep -B1 -A4 org.jboss.netty
<dependency>
  <groupId>org.jboss.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.2.2.Final</version>
  <scope>compile</scope>
</dependency>
{code} As a consumer I can depend on zookeeper with an exclude for org.jboss.netty#netty, or I can let my transitive dep resolver pick a winner. This might be fine, except for those who might be using a more modern netty published under the newish io.netty groupId. With this twist you get both org.jboss.netty#netty;foo and io.netty#netty;bar on your classpath, and runtime errors ensue from incompatibilities, unless you add an exclude against zookeeper (and clearly don't enable the zk netty nio handling). I propose that this is a pom bug, although this is debatable.
Clearly, as currently packaged, zookeeper needs netty to compile, but I'd argue that since it does not need netty to run, either the scope should be "provided" or "optional", or a zookeeper-netty lib should be broken out as an optional dependency, and this new dep published by zookeeper can have a proper compile dependency on netty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
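One of the options proposed above could look like the following hypothetical pom fragment (a sketch only, not the committed fix; the version shown is simply the one from the 3.4.5 pom quoted earlier):

```xml
<dependency>
  <groupId>org.jboss.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.2.2.Final</version>
  <!-- consumers must opt in explicitly; netty is no longer pulled in transitively -->
  <optional>true</optional>
</dependency>
```

With <optional>true</optional>, downstream projects that don't enable the netty connection handling no longer need an explicit exclude.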
ZooKeeper-trunk-ibm6 - Build # 603 - Failure
See https://builds.apache.org/job/ZooKeeper-trunk-ibm6/603/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 301080 lines...] [junit] 2014-09-02 09:53:05,956 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:37800 [junit] 2014-09-02 09:53:05,964 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from /127.0.0.1:37800 [junit] 2014-09-02 09:53:05,964 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output [junit] 2014-09-02 09:53:05,965 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client /127.0.0.1:37800 (no session established for client) [junit] 2014-09-02 09:53:05,968 [myid:] - INFO [main:JMXEnv@224] - ensureParent:[InMemoryDataTree, StandaloneServer_port] [junit] 2014-09-02 09:53:05,972 [myid:] - INFO [main:JMXEnv@241] - expect:InMemoryDataTree [junit] 2014-09-02 09:53:05,972 [myid:] - INFO [main:JMXEnv@245] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree [junit] 2014-09-02 09:53:05,972 [myid:] - INFO [main:JMXEnv@241] - expect:StandaloneServer_port [junit] 2014-09-02 09:53:05,973 [myid:] - INFO [main:JMXEnv@245] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port-1 [junit] 2014-09-02 09:53:05,973 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 6134 [junit] 2014-09-02 09:53:05,974 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 40 [junit] 2014-09-02 09:53:05,974 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota [junit] 2014-09-02 09:53:05,974 [myid:] - INFO [main:ClientBase@520] - tearDown starting [junit] 2014-09-02 09:53:06,004 [myid:] - INFO [SessionTracker:SessionTrackerImpl@157] - SessionTrackerImpl exited loop! 
[junit] 2014-09-02 09:53:06,004 [myid:] - INFO [SessionTracker:SessionTrackerImpl@157] - SessionTrackerImpl exited loop! [junit] 2014-09-02 09:53:07,337 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@1093] - Opening socket connection to server 127.0.0.1/127.0.0.1:11221. Will not attempt to authenticate using SASL (java.lang.SecurityException: Unable to locate a login configuration) [junit] 2014-09-02 09:53:07,338 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@963] - Socket connection established to 127.0.0.1/127.0.0.1:11221, initiating session [junit] 2014-09-02 09:53:07,340 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:37808 [junit] 2014-09-02 09:53:07,342 [myid:] - INFO [NIOWorkerThread-2:ZooKeeperServer@877] - Client attempting to renew session 0x14835c82e91 at /127.0.0.1:37808 [junit] 2014-09-02 09:53:07,345 [myid:] - INFO [NIOWorkerThread-2:ZooKeeperServer@619] - Established session 0x14835c82e91 with negotiated timeout 3 for client /127.0.0.1:37808 [junit] 2014-09-02 09:53:07,355 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@1346] - Session establishment complete on server 127.0.0.1/127.0.0.1:11221, sessionid = 0x14835c82e91, negotiated timeout = 3 [junit] 2014-09-02 09:53:07,364 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@676] - Processed session termination for sessionid: 0x14835c82e91 [junit] 2014-09-02 09:53:07,368 [myid:] - INFO [SyncThread:0:FileTxnLog@200] - Creating new log file: log.c [junit] 2014-09-02 09:53:07,377 [myid:] - INFO [NIOWorkerThread-5:MBeanRegistry@119] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=Connections,name2=127.0.0.1,name3=0x14835c82e91] [junit] 2014-09-02 09:53:07,378 [myid:] - INFO [NIOWorkerThread-5:NIOServerCnxn@1006] - Closed socket connection for client /127.0.0.1:37808 which had 
sessionid 0x14835c82e91 [junit] 2014-09-02 09:53:07,380 [myid:] - INFO [main:ZooKeeper@968] - Session: 0x14835c82e91 closed [junit] 2014-09-02 09:53:07,381 [myid:] - INFO [main:ClientBase@490] - STOPPING server [junit] 2014-09-02 09:53:07,381 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted [junit] 2014-09-02 09:53:07,383 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method [junit] 2014-09-02 09:53:07,384 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread
[jira] [Updated] (ZOOKEEPER-1962) Add a CLI command to recursively list a znode and children
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gautam Gopalakrishnan updated ZOOKEEPER-1962: - Attachment: (was: ZOOKEEPER-1962.diff) Add a CLI command to recursively list a znode and children -- Key: ZOOKEEPER-1962 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1962 Project: ZooKeeper Issue Type: New Feature Components: java client Affects Versions: 3.4.6 Reporter: Gautam Gopalakrishnan Assignee: Gautam Gopalakrishnan Priority: Minor Fix For: 3.5.1 Attachments: ZOOKEEPER-1962.diff Original Estimate: 24h Remaining Estimate: 24h When troubleshooting applications where znodes can be multiple levels deep (e.g. HBase replication), it is handy to see all child znodes recursively rather than running ls for each node manually. So I propose adding an option to the ls command (-r) which will list all child nodes under a given znode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-1962) Add a CLI command to recursively list a znode and children
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gautam Gopalakrishnan updated ZOOKEEPER-1962: - Attachment: ZOOKEEPER-1962.diff Add a CLI command to recursively list a znode and children -- Key: ZOOKEEPER-1962 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1962 Project: ZooKeeper Issue Type: New Feature Components: java client Affects Versions: 3.4.6 Reporter: Gautam Gopalakrishnan Assignee: Gautam Gopalakrishnan Priority: Minor Fix For: 3.5.1 Attachments: ZOOKEEPER-1962.diff Original Estimate: 24h Remaining Estimate: 24h When troubleshooting applications where znodes can be multiple levels deep (e.g. HBase replication), it is handy to see all child znodes recursively rather than running ls for each node manually. So I propose adding an option to the ls command (-r) which will list all child nodes under a given znode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
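The recursive listing proposed in ZOOKEEPER-1962 boils down to a depth-first walk over getChildren. A minimal sketch of the traversal, modeled in Python over an in-memory znode tree (the tree contents and the function name are illustrative only, not the actual patch, which is a Java change to the CLI):

```python
def ls_recursive(tree, path):
    """Depth-first listing of `path` and all znodes under it.
    `tree` maps each znode path to a list of child names,
    mirroring what ZooKeeper's getChildren() returns."""
    result = [path]
    for child in sorted(tree.get(path, [])):
        child_path = ("" if path == "/" else path) + "/" + child
        result.extend(ls_recursive(tree, child_path))
    return result

# Hypothetical replication-style hierarchy a few levels deep.
tree = {
    "/": ["hbase"],
    "/hbase": ["replication"],
    "/hbase/replication": ["peers", "rs"],
    "/hbase/replication/peers": ["1"],
}
print(ls_recursive(tree, "/"))
```

Against a live server, each `tree.get(path, [])` lookup would instead be a `getChildren` call; the traversal order and output are otherwise the same as what `ls -r` would print.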
ZooKeeper_branch34_openjdk7 - Build # 621 - Still Failing
See https://builds.apache.org/job/ZooKeeper_branch34_openjdk7/621/ ### ## LAST 60 LINES OF THE CONSOLE ### Started by timer Building remotely on H11 (Ubuntu ubuntu) in workspace /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_openjdk7 Updating http://svn.apache.org/repos/asf/zookeeper/branches/branch-3.4 at revision '2014-09-02T10:29:01.359 +' At revision 1621956 no change for http://svn.apache.org/repos/asf/zookeeper/branches/branch-3.4 since the previous build No emails were triggered. [locks-and-latches] Checking to see if we really have the locks [locks-and-latches] Have all the locks, build can start [branch-3.4] $ /home/jenkins/tools/ant/latest/bin/ant -Dtest.output=yes -Dtest.junit.output.format=xml -Djavac.target=1.7 clean test-core-java Error: JAVA_HOME is not defined correctly. We cannot execute /usr/lib/jvm/java-7-openjdk-amd64//bin/java Build step 'Invoke Ant' marked build as failure [locks-and-latches] Releasing all the locks [locks-and-latches] All the locks released Recording test results Email was triggered for: Failure Sending email for trigger: Failure ### ## FAILED TESTS (if any) ## No tests ran.
ZooKeeper_branch34_jdk7 - Build # 632 - Failure
See https://builds.apache.org/job/ZooKeeper_branch34_jdk7/632/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 180388 lines...] [junit] 2014-09-02 10:26:56,584 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down [junit] 2014-09-02 10:26:56,584 [myid:] - INFO [main:PrepRequestProcessor@761] - Shutting down [junit] 2014-09-02 10:26:56,585 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down [junit] 2014-09-02 10:26:56,585 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! [junit] 2014-09-02 10:26:56,585 [myid:] - INFO [main:FinalRequestProcessor@415] - shutdown of request processor complete [junit] 2014-09-02 10:26:56,586 [myid:] - INFO [main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221 [junit] 2014-09-02 10:26:56,586 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] [junit] 2014-09-02 10:26:56,587 [myid:] - INFO [main:ClientBase@443] - STARTING server [junit] 2014-09-02 10:26:56,587 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11221 [junit] 2014-09-02 10:26:56,588 [myid:] - INFO [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11221 [junit] 2014-09-02 10:26:56,588 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11221 [junit] 2014-09-02 10:26:56,589 [myid:] - INFO [main:ZooKeeperServer@162] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 6 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test2618311151514149209.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test2618311151514149209.junit.dir/version-2 [junit] 2014-09-02 10:26:56,589 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 
[junit] 2014-09-02 10:26:56,601 [myid:] - INFO [main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221 [junit] 2014-09-02 10:26:56,601 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:56917 [junit] 2014-09-02 10:26:56,601 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxn@827] - Processing stat command from /127.0.0.1:56917 [junit] 2014-09-02 10:26:56,616 [myid:] - INFO [Thread-4:NIOServerCnxn$StatCommand@663] - Stat command output [junit] 2014-09-02 10:26:56,616 [myid:] - INFO [Thread-4:NIOServerCnxn@1007] - Closed socket connection for client /127.0.0.1:56917 (no session established for client) [junit] 2014-09-02 10:26:56,617 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port] [junit] 2014-09-02 10:26:56,618 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree [junit] 2014-09-02 10:26:56,618 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree [junit] 2014-09-02 10:26:56,619 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port [junit] 2014-09-02 10:26:56,619 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port-1 [junit] 2014-09-02 10:26:56,619 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 10528 [junit] 2014-09-02 10:26:56,620 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 21 [junit] 2014-09-02 10:26:56,620 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota [junit] 2014-09-02 10:26:56,620 [myid:] - INFO [main:ClientBase@520] - tearDown starting [junit] 2014-09-02 10:26:56,906 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x14835e72e38 closed [junit] 2014-09-02 10:26:56,906 [myid:] - INFO [main:ClientBase@490] - STOPPING server [junit] 
2014-09-02 10:26:56,908 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@512] - EventThread shut down [junit] 2014-09-02 10:26:56,908 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method [junit] 2014-09-02 10:26:56,908 [myid:] - INFO [main:ZooKeeperServer@441] - shutting down [junit] 2014-09-02 10:26:56,908 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down [junit] 2014-09-02 10:26:56,909 [myid:] - INFO [main:PrepRequestProcessor@761] - Shutting down [junit] 2014-09-02 10:26:56,909 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down [junit] 2014-09-02 10:26:56,909 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! [junit] 2014-09-02 10:26:56,909
[jira] [Commented] (ZOOKEEPER-1681) ZooKeeper 3.4.x can optionally use netty for nio but the pom does not declare the dep as optional
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14118081#comment-14118081 ] Hadoop QA commented on ZOOKEEPER-1681: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12665883/ZOOKEEPER-1681.patch against trunk revision 1621313. +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed core unit tests. +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2309//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2309//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2309//console This message is automatically generated. 
ZooKeeper 3.4.x can optionally use netty for nio but the pom does not declare the dep as optional - Key: ZOOKEEPER-1681 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1681 Project: ZooKeeper Issue Type: Improvement Affects Versions: 3.4.0, 3.4.1, 3.4.2, 3.4.4, 3.4.5 Reporter: John Sirois Fix For: 3.5.1 Attachments: ZOOKEEPER-1681.patch For example in [3.4.5|http://search.maven.org/remotecontent?filepath=org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.pom] we see:
{code}
$ curl -sS http://search.maven.org/remotecontent?filepath=org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.pom | grep -B1 -A4 org.jboss.netty
<dependency>
  <groupId>org.jboss.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.2.2.Final</version>
  <scope>compile</scope>
</dependency>
{code}
As a consumer I can depend on zookeeper with an exclude for org.jboss.netty#netty or I can let my transitive dep resolver pick a winner. This might be fine, except for those who might be using a more modern netty published under the newish io.netty groupId. With this twist you get both org.jboss.netty#netty;foo and io.netty#netty;bar on your classpath, and runtime errors ensue from incompatibilities, unless you add an exclude against zookeeper (and clearly don't enable the zk netty nio handling). I propose that this is a pom bug, although this is debatable. Clearly, as currently packaged, zookeeper needs netty to compile, but I'd argue that since it does not need netty to run, either the scope should be provided, or the dependency should be marked optional, or a zookeeper-netty lib should be broken out as an optional dependency; this new dep published by zookeeper can then have a proper compile dependency on netty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
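One of the fixes the reporter floats, marking the dependency optional, would look roughly like this in the published pom (a sketch only; the issue leaves open whether {{optional}}, a {{provided}} scope, or a split-out zookeeper-netty artifact is the right call):

{code}
<dependency>
  <groupId>org.jboss.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.2.2.Final</version>
  <!-- consumers who enable ZooKeeper's netty connection handling
       declare their own netty dependency explicitly -->
  <optional>true</optional>
</dependency>
{code}

With {{optional}} set, Maven stops propagating netty transitively, so consumers on io.netty no longer collect two incompatible netty jars by default.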
Failed: ZOOKEEPER-1681 PreCommit Build #2309
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-1681 Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2309/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 307224 lines...] [exec] [exec] -1 tests included. The patch doesn't appear to include any new or modified tests. [exec] Please justify why no new tests are needed for this patch. [exec] Also please list what manual steps were performed to verify this patch. [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. [exec] [exec] -1 core tests. The patch failed core unit tests. [exec] [exec] +1 contrib tests. The patch passed contrib unit tests. [exec] [exec] Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2309//testReport/ [exec] Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2309//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html [exec] Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2309//console [exec] [exec] This message is automatically generated. [exec] [exec] [exec] == [exec] == [exec] Adding comment to Jira. [exec] == [exec] == [exec] [exec] [exec] Comment added. [exec] d18ab33b80eb97fc456dd5383a6f7b1d8b98 logged out [exec] [exec] [exec] == [exec] == [exec] Finished build. 
[exec] == [exec] == [exec] [exec] BUILD FAILED /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build.xml:1713: exec returned: 2 Total time: 38 minutes 25 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Sending artifact delta relative to PreCommit-ZOOKEEPER-Build #2179 Archived 7 artifacts Archive block size is 32768 Received 0 blocks and 547561 bytes Compression is 0.0% Took 2.6 sec Recording test results Description set: ZOOKEEPER-1681 Email was triggered for: Failure Sending email for trigger: Failure ### ## FAILED TESTS (if any) ## 1 tests failed. FAILED: org.apache.zookeeper.test.NioNettySuiteHammerTest.testHammer Error Message: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit. Stack Trace: junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.
Failed: ZOOKEEPER-1962 PreCommit Build #2310
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-1962 Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2310/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 318503 lines...] [exec] [exec] +1 @author. The patch does not contain any @author tags. [exec] [exec] +1 tests included. The patch appears to include 3 new or modified tests. [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. [exec] [exec] -1 core tests. The patch failed core unit tests. [exec] [exec] +1 contrib tests. The patch passed contrib unit tests. [exec] [exec] Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2310//testReport/ [exec] Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2310//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html [exec] Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2310//console [exec] [exec] This message is automatically generated. [exec] [exec] [exec] == [exec] == [exec] Adding comment to Jira. [exec] == [exec] == [exec] [exec] [exec] Comment added. [exec] 4248e2cf72265e8ad26e1a53a8c6967254bf8881 logged out [exec] [exec] [exec] == [exec] == [exec] Finished build. 
[exec] == [exec] == [exec] [exec] BUILD FAILED /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build.xml:1713: exec returned: 1 Total time: 38 minutes 16 seconds Build step 'Execute shell' marked build as failure Archiving artifacts Sending artifact delta relative to PreCommit-ZOOKEEPER-Build #2179 Archived 7 artifacts Archive block size is 32768 Received 0 blocks and 547193 bytes Compression is 0.0% Took 0.74 sec Recording test results Description set: ZOOKEEPER-1962 Email was triggered for: Failure Sending email for trigger: Failure ### ## FAILED TESTS (if any) ## 2 tests failed. REGRESSION: org.apache.zookeeper.test.ReconfigTest.testPortChange Error Message: expected:test[1] but was:test[0] Stack Trace: junit.framework.AssertionFailedError: expected:test[1] but was:test[0] at org.apache.zookeeper.test.ReconfigTest.testNormalOperation(ReconfigTest.java:151) at org.apache.zookeeper.test.ReconfigTest.testPortChange(ReconfigTest.java:600) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) FAILED: org.apache.zookeeper.test.NioNettySuiteHammerTest.testHammer Error Message: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit. Stack Trace: junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.
[jira] [Commented] (ZOOKEEPER-1962) Add a CLI command to recursively list a znode and children
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14118087#comment-14118087 ] Hadoop QA commented on ZOOKEEPER-1962: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12665886/ZOOKEEPER-1962.diff against trunk revision 1621313. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 3 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed core unit tests. +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2310//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2310//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2310//console This message is automatically generated. Add a CLI command to recursively list a znode and children -- Key: ZOOKEEPER-1962 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1962 Project: ZooKeeper Issue Type: New Feature Components: java client Affects Versions: 3.4.6 Reporter: Gautam Gopalakrishnan Assignee: Gautam Gopalakrishnan Priority: Minor Fix For: 3.5.1 Attachments: ZOOKEEPER-1962.diff Original Estimate: 24h Remaining Estimate: 24h When troubleshooting applications where znodes can be multiple levels deep (eg. HBase replication), it is handy to see all child znodes recursively rather than run an ls for each node manually. 
So I propose adding an option to the ls command (-r) which will list all child nodes under a given znode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
ZooKeeper_branch35_jdk7 - Build # 31 - Failure
See https://builds.apache.org/job/ZooKeeper_branch35_jdk7/31/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 325832 lines...] [junit] 2014-09-02 11:05:44,954 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11221 [junit] 2014-09-02 11:05:44,954 [myid:] - INFO [main:NIOServerCnxnFactory@670] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 32 worker threads, and 64 kB direct buffers. [junit] 2014-09-02 11:05:44,955 [myid:] - INFO [main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221 [junit] 2014-09-02 11:05:44,955 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11221 [junit] 2014-09-02 11:05:44,956 [myid:] - INFO [main:ZooKeeperServer@781] - minSessionTimeout set to 6000 [junit] 2014-09-02 11:05:44,956 [myid:] - INFO [main:ZooKeeperServer@790] - maxSessionTimeout set to 6 [junit] 2014-09-02 11:05:44,956 [myid:] - INFO [main:ZooKeeperServer@152] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 6 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk7/branch-3.5/build/test/tmp/test2959686863559394038.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk7/branch-3.5/build/test/tmp/test2959686863559394038.junit.dir/version-2 [junit] 2014-09-02 11:05:44,957 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk7/branch-3.5/build/test/tmp/test2959686863559394038.junit.dir/version-2/snapshot.b [junit] 2014-09-02 11:05:44,960 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0xb to /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk7/branch-3.5/build/test/tmp/test2959686863559394038.junit.dir/version-2/snapshot.b [junit] 2014-09-02 11:05:44,962 [myid:] - INFO [main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221 [junit] 2014-09-02 11:05:44,962 [myid:] - INFO 
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:38676 [junit] 2014-09-02 11:05:44,963 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from /127.0.0.1:38676 [junit] 2014-09-02 11:05:44,963 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output [junit] 2014-09-02 11:05:44,964 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client /127.0.0.1:38676 (no session established for client) [junit] 2014-09-02 11:05:44,964 [myid:] - INFO [main:JMXEnv@224] - ensureParent:[InMemoryDataTree, StandaloneServer_port] [junit] 2014-09-02 11:05:44,966 [myid:] - INFO [main:JMXEnv@241] - expect:InMemoryDataTree [junit] 2014-09-02 11:05:44,966 [myid:] - INFO [main:JMXEnv@245] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree [junit] 2014-09-02 11:05:44,966 [myid:] - INFO [main:JMXEnv@241] - expect:StandaloneServer_port [junit] 2014-09-02 11:05:44,967 [myid:] - INFO [main:JMXEnv@245] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port-1 [junit] 2014-09-02 11:05:44,967 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 18311 [junit] 2014-09-02 11:05:44,967 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24 [junit] 2014-09-02 11:05:44,967 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota [junit] 2014-09-02 11:05:44,968 [myid:] - INFO [main:ClientBase@520] - tearDown starting [junit] 2014-09-02 11:05:45,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@157] - SessionTrackerImpl exited loop! [junit] 2014-09-02 11:05:45,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@157] - SessionTrackerImpl exited loop! 
[junit] 2014-09-02 11:05:45,029 [myid:] - INFO [main:ZooKeeper@968] - Session: 0x148360ab620 closed [junit] 2014-09-02 11:05:45,029 [myid:] - INFO [main:ClientBase@490] - STOPPING server [junit] 2014-09-02 11:05:45,029 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down [junit] 2014-09-02 11:05:45,052 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method [junit] 2014-09-02 11:05:45,053 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method [junit] 2014-09-02 11:05:45,053 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread
ZooKeeper-trunk-openjdk7 - Build # 557 - Still Failing
See https://builds.apache.org/job/ZooKeeper-trunk-openjdk7/557/ ### ## LAST 60 LINES OF THE CONSOLE ### Started by timer Building remotely on H3 (Mapreduce Hadoop Zookeeper ubuntu Hdfs) in workspace /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7 Updating http://svn.apache.org/repos/asf/zookeeper/trunk at revision '2014-09-02T19:52:30.943 +' At revision 1622107 no change for http://svn.apache.org/repos/asf/zookeeper/trunk since the previous build No emails were triggered. [locks-and-latches] Checking to see if we really have the locks [locks-and-latches] Have all the locks, build can start [trunk] $ /home/jenkins/tools/ant/latest/bin/ant -Dtest.output=yes -Dtest.junit.output.format=xml -Djavac.target=1.7 clean test-core-java Error: JAVA_HOME is not defined correctly. We cannot execute /usr/lib/jvm/java-7-openjdk-amd64//bin/java Build step 'Invoke Ant' marked build as failure [locks-and-latches] Releasing all the locks [locks-and-latches] All the locks released Recording test results Email was triggered for: Failure Sending email for trigger: Failure ### ## FAILED TESTS (if any) ## No tests ran.
Re: Review Request 25160: Major throughput improvement with mixed workloads
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/25160/#review52074 --- ./src/java/main/org/apache/zookeeper/server/quorum/CommitProcessor.java https://reviews.apache.org/r/25160/#comment90808 what if request is null? - Grant Monroe On Aug. 28, 2014, 6:27 p.m., Kfir Lev-Ari wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/25160/ --- (Updated Aug. 28, 2014, 6:27 p.m.) Review request for zookeeper, Raul Gutierrez Segales and Alexander Shraer. Repository: zookeeper Description --- Please see https://issues.apache.org/jira/browse/ZOOKEEPER-2024 Diffs - ./src/java/main/org/apache/zookeeper/server/quorum/CommitProcessor.java 1619360 ./src/java/test/org/apache/zookeeper/server/quorum/CommitProcessorConcurrencyTest.java 1619360 ./src/java/test/org/apache/zookeeper/server/quorum/CommitProcessorTest.java 1619360 Diff: https://reviews.apache.org/r/25160/diff/ Testing --- The attached unit tests, as well as the system test found in https://issues.apache.org/jira/browse/ZOOKEEPER-2023. Thanks, Kfir Lev-Ari
[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14118710#comment-14118710 ] Hongchao Deng commented on ZOOKEEPER-2024: -- [~kfirlevari] bq. During that period, the session of that request is stalled, but the other sessions might have requests that do not need to be committed, and therefore can be processed. Agreed with this point. It would be nice to have this fixed/improved ASAP. bq. To this end, we add data structures for buffering and managing pending requests of stalled sessions This is the point I questioned. It seems to put more load on the server and potentially limit scalability, and I doubt whether this is necessary. For example, as you mentioned: bq. this severely hampers performance as it does not allow read-only sessions to proceed at faster speed than read-write ones. Read requests could be processed directly. Why does it need to buffer other sessions' write requests? So IMO, a buffer-less method seems the best way to handle separate sessions, right? Major throughput improvement with mixed workloads - Key: ZOOKEEPER-2024 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2024 Project: ZooKeeper Issue Type: Improvement Components: quorum, server Reporter: Kfir Lev-Ari Assignee: Kfir Lev-Ari Attachments: ZOOKEEPER-2024.patch The patch is applied to the commit processor, and solves two problems: 1. Stalling - once the commit processor encounters a local write request, it stalls local processing of all sessions until it receives a commit of that request from the leader. In mixed workloads, this severely hampers performance as it does not allow read-only sessions to proceed at faster speed than read-write ones. 2. Starvation - as long as there are read requests to process, older remote committed write requests are starved. 
This occurs due to a bug fix (https://issues.apache.org/jira/browse/ZOOKEEPER-1505) that forces processing of local read requests before handling any committed write. The problem is only manifested under high local read load. Our solution solves these two problems. It improves throughput in mixed workloads (in our tests, by up to 8x), and reduces latency, especially higher percentiles (i.e., slowest requests). The main idea is to separate sessions that inherently need to stall in order to enforce order semantics, from ones that do not need to stall. To this end, we add data structures for buffering and managing pending requests of stalled sessions; these requests are moved out of the critical path to these data structures, allowing continued processing of unaffected sessions. In order to avoid starvation, our solution prioritizes committed write requests over reads, and enforces fairness among read requests of sessions. Please see the docs: 1) https://docs.google.com/document/d/1oXJiSt9VqL35hCYQRmFuC63ETd0F_g6uApzocgkFe3Y/edit?usp=sharing - includes a detailed description of the new commit processor algorithm. 2) The attached patch implements our solution, and a collection of related unit tests (https://reviews.apache.org/r/25160) 3) https://docs.google.com/spreadsheets/d/1vmdfsq4WLr92BQO-CGcualE0KhAtjIu3bCaVwYajLo8/edit?usp=sharing - shows performance results of running system tests on the patched ZK using the patched system test from https://issues.apache.org/jira/browse/ZOOKEEPER-2023. See also https://issues.apache.org/jira/browse/ZOOKEEPER-1609 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
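The separation the description outlines can be illustrated with a toy model (Python, with names of my own invention — this is not the actual CommitProcessor code, just a sketch of the queueing idea under the stated semantics): a session with an uncommitted local write is stalled and its later requests are buffered per-session, while requests from unstalled sessions are processed immediately.

```python
from collections import defaultdict, deque

class ToyCommitProcessor:
    """Toy model: sessions with an uncommitted local write are
    'stalled'; their later requests are buffered per-session
    instead of blocking every other session."""
    def __init__(self):
        self.pending = defaultdict(deque)  # session -> buffered requests
        self.stalled = set()               # sessions awaiting a commit
        self.processed = []                # order requests reach the next stage

    def submit(self, session, op):
        if session in self.stalled:
            self.pending[session].append(op)      # buffer; others unaffected
        elif op == "write":
            self.stalled.add(session)             # wait for the leader's commit
        else:
            self.processed.append((session, op))  # reads proceed immediately

    def commit(self, session):
        # Leader committed the session's write: emit it, then drain the
        # session's buffered requests in order, re-stalling on a new write.
        self.processed.append((session, "write"))
        self.stalled.discard(session)
        q = self.pending[session]
        while q and session not in self.stalled:
            self.submit(session, q.popleft())

cp = ToyCommitProcessor()
cp.submit("s1", "write")   # s1 stalls on its own write
cp.submit("s1", "read")    # buffered behind s1's pending write
cp.submit("s2", "read")    # s2 is unaffected and proceeds at once
cp.commit("s1")            # commit releases s1: write, then the buffered read
print(cp.processed)
```

Note how s2's read is processed before s1's write commits — exactly the behavior the unpatched processor forbids. The real patch additionally prioritizes committed writes over new reads and enforces fairness among sessions' reads, which this sketch does not model.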
[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118741#comment-14118741 ] Alexander Shraer commented on ZOOKEEPER-2024:

Read requests can only be processed if they're not blocked by a previous write in the same session. So we need to be able to check whether the session is stalled waiting for a write to commit or not. The idea is to move the pending requests of such blocked sessions into a separate data structure (where they are indexed in a more convenient way) instead of just keeping them in the usual request queue and having to traverse that queue multiple times.

Major throughput improvement with mixed workloads
Key: ZOOKEEPER-2024
URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2024
Project: ZooKeeper
Issue Type: Improvement
Components: quorum, server
Reporter: Kfir Lev-Ari
Assignee: Kfir Lev-Ari
Attachments: ZOOKEEPER-2024.patch

The patch is applied to the commit processor and solves two problems:
1. Stalling - once the commit processor encounters a local write request, it stalls local processing of all sessions until it receives a commit of that request from the leader. In mixed workloads this severely hampers performance, as it does not allow read-only sessions to proceed at a faster speed than read-write ones.
2. Starvation - as long as there are read requests to process, older remote committed write requests are starved.
[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118776#comment-14118776 ] Hongchao Deng commented on ZOOKEEPER-2024:

bq. Read requests can only be processed if they're not blocked by a previous write in the same session. So we need to be able to check whether the session is stalled waiting for a write to commit or not.

Now I recall what I meant by the second type. This is it. So why not put the problem on the client side? Clients can buffer all requests and use different sessions to serve them without being blocked. The tradeoff here is client side vs. server side. Isn't pushing the load onto the client side instead of the server side a better method?
Re: Review Request 25160: Major throughput improvement with mixed workloads
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/25160/#review52075 ---

./src/java/main/org/apache/zookeeper/server/quorum/CommitProcessor.java
https://reviews.apache.org/r/25160/#comment90825
I think this would be clearer as:

    if (needCommit(request) || pendingRequests.keySet().contains(request.sessionId)) {
        addToPending(request);
    } else {
        getSessionsRequests(request).addLast(request);
    }

./src/java/main/org/apache/zookeeper/server/quorum/CommitProcessor.java
https://reviews.apache.org/r/25160/#comment90826
Should this be continue instead of return? If a single session has reached MAX_OUTSTANDING_READS_PER_SESSION, do we want to stop processing other sessions?

./src/java/main/org/apache/zookeeper/server/quorum/CommitProcessor.java
https://reviews.apache.org/r/25160/#comment90810
s/outsanding/outstanding/g

- Grant Monroe

On Aug. 28, 2014, 6:27 p.m., Kfir Lev-Ari wrote:
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/25160/ ---
(Updated Aug. 28, 2014, 6:27 p.m.)
Review request for zookeeper, Raul Gutierrez Segales and Alexander Shraer.
Repository: zookeeper

Description
---
Please see https://issues.apache.org/jira/browse/ZOOKEEPER-2024

Diffs
---
./src/java/main/org/apache/zookeeper/server/quorum/CommitProcessor.java 1619360
./src/java/test/org/apache/zookeeper/server/quorum/CommitProcessorConcurrencyTest.java 1619360
./src/java/test/org/apache/zookeeper/server/quorum/CommitProcessorTest.java 1619360
Diff: https://reviews.apache.org/r/25160/diff/

Testing
---
The attached unit tests, as well as the system test found in https://issues.apache.org/jira/browse/ZOOKEEPER-2023.

Thanks,
Kfir Lev-Ari
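The dispatch rule Grant suggests in the first comment can be made self-contained roughly as follows. `Dispatch`, `Request`, and `ready` are hypothetical stand-ins for illustration, not the names used in the patch, and `needCommit` here simply flags writes:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class Dispatch {
    static class Request {
        final long sessionId;
        final boolean isWrite;
        Request(long sessionId, boolean isWrite) {
            this.sessionId = sessionId;
            this.isWrite = isWrite;
        }
    }

    // sessionId -> requests waiting on a commit (stalled sessions)
    final Map<Long, Deque<Request>> pendingRequests = new HashMap<>();
    // requests that are free to be processed immediately
    final Deque<Request> ready = new ArrayDeque<>();

    boolean needCommit(Request r) {
        return r.isWrite; // stand-in: in reality this inspects the request type
    }

    // The suggested rule: a request goes to the pending structure if it needs a
    // commit, or if its session already has something pending (to preserve
    // session FIFO order); otherwise it can be processed right away.
    void dispatch(Request r) {
        if (needCommit(r) || pendingRequests.containsKey(r.sessionId)) {
            pendingRequests.computeIfAbsent(r.sessionId, k -> new ArrayDeque<>()).addLast(r);
        } else {
            ready.addLast(r);
        }
    }
}
```

Note the second branch of the condition is what keeps a read of a stalled session from overtaking that session's earlier write, while reads of other sessions flow straight through.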
[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118819#comment-14118819 ] Alexander Shraer commented on ZOOKEEPER-2024:

Your suggestion departs much further from the current implementation than Kfir's patch does. But I also suspect that blocking on the client side would have a performance impact. Imagine a bunch of reads blocked by a write on a client: we could have processed the reads immediately after the commit of the write reached the local server; instead, the client will first need to receive the commit and only then send the reads to the local server, so this is added latency for all those reads. I suspect that by not keeping the server's pipeline full, this method would also have a significant impact on throughput.
Re: Review Request 25160: Major throughput improvement with mixed workloads
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/25160/#review52091 ---

./src/java/main/org/apache/zookeeper/server/quorum/CommitProcessor.java
https://reviews.apache.org/r/25160/#comment90827
This looks controversial. I can see the advantage of fast forwarding pings to keep the client from disconnecting, but this means that pings no longer indicate the health of the write pipeline, i.e. before this patch, in the case of a pending write, pings are blocked. This patch changes that behavior.

- Grant Monroe
[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118904#comment-14118904 ] Hongchao Deng commented on ZOOKEEPER-2024:

bq. imagine a bunch of reads blocked by a write on a client ...

A technique I can think of on the client side is buffering all requests and then multiplexing them into multiple sessions to achieve non-blocking. Nonetheless, my point isn't about how to handle heavy read traffic; it is about putting as much of the load as possible on the client side instead of the server side, to achieve scalability. This sounds like throughput tuning for user-specific activity: if a user knows the workload is read-heavy and the requests don't conflict, should they just create different sessions for those requests to achieve non-blocking? If ZK can help users take care of this, my intuition goes directly to the client side. Right?
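The client-side multiplexing Hongchao describes might look roughly like the sketch below. `SessionMultiplexer` and the string session handles are hypothetical stand-ins; a real client would hold multiple live ZooKeeper connections rather than strings:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the client-side alternative: spread independent requests across
// several sessions so that a pending write in one session does not block
// reads issued through another session.
class SessionMultiplexer {
    private final List<String> sessions;
    private int next = 0;

    SessionMultiplexer(int sessionCount) {
        sessions = new ArrayList<>();
        for (int i = 0; i < sessionCount; i++) {
            sessions.add("session-" + i); // placeholder for a real connection
        }
    }

    // Round-robin routing: each independent request may land on a different
    // session, trading ZK's per-session FIFO ordering for non-blocking reads.
    String route() {
        String s = sessions.get(next);
        next = (next + 1) % sessions.size();
        return s;
    }
}
```

The tradeoff Alexander raises applies directly to this sketch: requests routed to different sessions have no ordering guarantees between them, which is exactly the semantics question debated below.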
[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118981#comment-14118981 ] Alexander Shraer commented on ZOOKEEPER-2024:

I see. Well, ZK provides guarantees per session, so you won't have any guarantees across sessions, which may be fine if the operations are really independent. But I think your intuition makes sense. Instead of multiple sessions (where you lose all semantics), one could specify relaxed per-operation or per-session semantics. I've actually suggested this in the past here: http://wiki.apache.org/hadoop/ZooKeeper/MountRemoteZookeeper

The specific ZK property that requires blocking reads after writes of the same session is the prefix/FIFO ordering of client requests. So even if the requests are concurrent (i.e., one didn't complete before the other started), ZK guarantees that they will complete in invocation order, and that if one fails, the ones invoked after it will fail too. IIRC the feedback I got was that users expect this FIFO property (it reflects program order) and relaxing it may not be a good idea. But personally I still think this could be a good optional feature (either per operation or per session). For example, a client's program:

write(v, 1) is invoked
write(v, 1) completes
write(v, 2) is invoked
read(v) is invoked

Currently the read must return 2, but perhaps the user could say read(v, I want sequential consistency without fifo), which could return either 1 or 2. This could improve the throughput further. In any case, something like this is probably out of scope for Kfir's proposal, but with Kfir's patch in place it would probably be easier to add in the future.
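Alexander's example of FIFO versus relaxed reads can be modeled as a toy sketch. `FifoSession` and its methods are hypothetical illustrations of the semantics only, not ZooKeeper client code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of one client session. write(v, 1) has already completed, so the
// committed value starts at 1; a later write(v, 2) may still be in flight.
class FifoSession {
    private int committed = 1;                              // last committed value of v
    private final Deque<Integer> inFlight = new ArrayDeque<>(); // writes not yet committed

    void write(int v) {
        inFlight.add(v);
    }

    void commitOne() {
        if (!inFlight.isEmpty()) {
            committed = inFlight.poll();
        }
    }

    // FIFO read: must reflect every earlier write of this session, so it
    // effectively waits until the in-flight writes have committed (modeled
    // here by draining them synchronously).
    int fifoRead() {
        while (!inFlight.isEmpty()) {
            commitOne();
        }
        return committed;
    }

    // Relaxed read: sequentially consistent but not FIFO with this session's
    // own writes; it may return the currently committed value even while a
    // newer write is still in flight (so either 1 or 2 in the example).
    int relaxedRead() {
        return committed;
    }
}
```

In this toy model the relaxed read deterministically returns the stale value; in a real system it could return either, which is the whole point of the relaxation.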
[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14119077#comment-14119077 ] Hongchao Deng commented on ZOOKEEPER-2024:

Okay, I now have a clearer idea of what I want to say. Let me summarize it.

h3. Stalling
I totally agree that unaffected requests should not block each other. But this seems to be a user-specific optimization. Even if the user doesn't want to handle it, we can still handle it in client code. So why do the buffering on the server side? The server-side approach means more load on the server and limits scalability.

h3. Starvation
I think it's good to provide options to enable read- or write-preferred ordering, with the default being sequential (no preference).

Since this is [~kfirlevari]'s proposal, could you clarify the tradeoff / motivation for pushing it to the server side? And feel free to ask me questions if anything I wrote is confusing.
[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14119087#comment-14119087 ] Alexander Shraer commented on ZOOKEEPER-2024:

Hongchao,

It seems that what you mean by unaffected requests is different from what Kfir means. In his proposal, all of a client's requests within a session are dependent, so a write blocks all further reads from the same session. This is the current ZK semantics. IMO your proposal to handle it on the client side will either change the semantics or, alternatively, will be worse, not better, for performance.
Build failed in Jenkins: bookkeeper-trunk #760
See https://builds.apache.org/job/bookkeeper-trunk/760/
-- [...truncated 773 lines...]
Running org.apache.bookkeeper.meta.GcLedgersTest Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.232 sec
Running org.apache.bookkeeper.bookie.LedgerCacheTest Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.554 sec
Running org.apache.bookkeeper.bookie.BookieThreadTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.124 sec
Running org.apache.bookkeeper.bookie.TestSyncThread Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.901 sec
Running org.apache.bookkeeper.bookie.IndexCorruptionTest Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.674 sec
Running org.apache.bookkeeper.bookie.CompactionTest Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.517 sec
Running org.apache.bookkeeper.bookie.BookieInitializationTest Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.659 sec
Running org.apache.bookkeeper.bookie.BookieJournalTest Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.844 sec
Running org.apache.bookkeeper.bookie.CreateNewLogTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.339 sec
Running org.apache.bookkeeper.bookie.CookieTest Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.947 sec
Running org.apache.bookkeeper.bookie.UpgradeTest Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.567 sec
Running org.apache.bookkeeper.bookie.TestLedgerDirsManager Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec
Running org.apache.bookkeeper.bookie.EntryLogTest Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.376 sec
Running org.apache.bookkeeper.bookie.BookieShutdownTest Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.571 sec
Running org.apache.bookkeeper.client.TestWatchEnsembleChange Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.814 sec
Running org.apache.bookkeeper.client.RoundRobinDistributionScheduleTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.122 sec
Running org.apache.bookkeeper.client.BookKeeperCloseTest Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.761 sec
Running org.apache.bookkeeper.client.ListLedgersTest Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.771 sec
Running org.apache.bookkeeper.client.TestFencing Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.562 sec
Running org.apache.bookkeeper.client.BookieWriteLedgerTest Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.048 sec
Running org.apache.bookkeeper.client.BookKeeperTest Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.759 sec
Running org.apache.bookkeeper.client.TestLedgerFragmentReplication Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.359 sec
Running org.apache.bookkeeper.client.LedgerCloseTest Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.608 sec
Running org.apache.bookkeeper.client.BookieRecoveryTest Tests run: 72, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.959 sec
Running org.apache.bookkeeper.client.TestReadTimeout Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.541 sec
Running org.apache.bookkeeper.client.LedgerRecoveryTest Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.644 sec
Running org.apache.bookkeeper.client.TestTryReadLastConfirmed Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.62 sec
Running org.apache.bookkeeper.client.TestSpeculativeRead Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.881 sec
Running org.apache.bookkeeper.client.TestLedgerChecker Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.888 sec
Running org.apache.bookkeeper.client.TestRackawareEnsemblePlacementPolicy Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.834 sec
Running org.apache.bookkeeper.client.SlowBookieTest Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.623 sec
Running org.apache.bookkeeper.util.TestDiskChecker Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.148 sec
Running org.apache.bookkeeper.metastore.TestMetaStore Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.277 sec
Running org.apache.bookkeeper.test.BookieZKExpireTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.79 sec
Running org.apache.bookkeeper.test.ConditionalSetTest Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.827 sec
Running org.apache.bookkeeper.test.ReadOnlyBookieTest Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.403 sec
Running org.apache.bookkeeper.test.ConcurrentLedgerTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.485 sec
Running