Re: Review Request 25217: Improved system test

2014-09-11 Thread Kfir Lev-Ari

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25217/
---

(Updated Sept. 11, 2014, 8:25 a.m.)


Review request for zookeeper.


Changes
---

I've found a bug in the bucket initialization; it turned out that values above 
1000 were not measured. 
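This class of bug is easy to hit. As a hypothetical illustration (not the actual GenerateLoad.java code), a histogram whose buckets only cover values below a fixed bound silently drops larger samples unless a final overflow bucket catches them:

```java
// Hypothetical sketch of the bucket-initialization pitfall; names and bounds
// are illustrative, not taken from the patch.
public class LatencyBuckets {
    static final int MAX_TRACKED_MS = 1000;            // 1 ms buckets up to 1000 ms
    static final int NUM_BUCKETS = MAX_TRACKED_MS + 1; // last bucket catches >= 1000 ms
    final int[] counts = new int[NUM_BUCKETS];

    // Buggy variant: samples >= 1000 ms were never counted, e.g.
    //   int bucket = (int) latencyMs;
    //   if (bucket < MAX_TRACKED_MS) counts[bucket]++;

    // Fixed variant: clamp large samples into the final overflow bucket.
    void record(long latencyMs) {
        int bucket = (int) Math.min(latencyMs, MAX_TRACKED_MS);
        counts[bucket]++;
    }

    // Read a percentile (e.g. p = 0.99) back out of the histogram: walk the
    // buckets until the cumulative count reaches the requested rank.
    int percentile(double p) {
        long total = 0;
        for (int c : counts) total += c;
        long rank = (long) Math.ceil(p * total);
        long seen = 0;
        for (int i = 0; i < counts.length; i++) {
            seen += counts[i];
            if (seen >= rank) return i;
        }
        return MAX_TRACKED_MS;
    }
}
```

With the buggy variant, a 2000 ms sample would vanish from every percentile; with the clamp it at least shows up as ">= 1000 ms".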


Repository: zookeeper


Description
---

Adding the ability to perform a system test of mixed workloads using 
read-only/mixed/write-only clients. 
In addition, adding a few basic latency statistics. See 
https://issues.apache.org/jira/browse/ZOOKEEPER-2023


Diffs (updated)
-

  ./src/java/systest/README.txt 1619360 
  ./src/java/systest/org/apache/zookeeper/test/system/GenerateLoad.java 1619360 

Diff: https://reviews.apache.org/r/25217/diff/


Testing
---


Thanks,

Kfir Lev-Ari



Re: Review Request 25217: Improved system test

2014-09-11 Thread Kfir Lev-Ari

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25217/
---

(Updated Sept. 11, 2014, 8:27 a.m.)


Review request for zookeeper and Alexander Shraer.


Repository: zookeeper


Description
---

Adding the ability to perform a system test of mixed workloads using 
read-only/mixed/write-only clients. 
In addition, adding a few basic latency statistics. See 
https://issues.apache.org/jira/browse/ZOOKEEPER-2023


Diffs
-

  ./src/java/systest/README.txt 1619360 
  ./src/java/systest/org/apache/zookeeper/test/system/GenerateLoad.java 1619360 

Diff: https://reviews.apache.org/r/25217/diff/


Testing
---


Thanks,

Kfir Lev-Ari



[jira] [Updated] (ZOOKEEPER-2023) Improved system test

2014-09-11 Thread Kfir Lev-Ari (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kfir Lev-Ari updated ZOOKEEPER-2023:

Attachment: ZOOKEEPER-2023.patch

I've updated the patch because I've found a bug in the initialization of the 
buckets.

 Improved system test
 

 Key: ZOOKEEPER-2023
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2023
 Project: ZooKeeper
  Issue Type: Test
  Components: contrib-fatjar
Affects Versions: 3.5.0
Reporter: Kfir Lev-Ari
Assignee: Kfir Lev-Ari
Priority: Minor
 Attachments: ZOOKEEPER-2023.patch, ZOOKEEPER-2023.patch


 Adding the ability to perform a system test of mixed workloads using 
 read-only/mixed/write-only clients. 
 In addition, adding a few basic latency statistics.
 https://reviews.apache.org/r/25217/
 Just in case it'll help someone, here is an example of how to run the 
 generateLoad system test:
 1. Check out zookeeper-trunk.
 2. Go to zookeeper-trunk and run: ant jar compile-test
 3. Go to zookeeper-trunk\src\contrib\fatjar and run: ant jar
 4. Copy zookeeper-dev-fatjar.jar from zookeeper-trunk\build\contrib\fatjar to 
 each of the machines you wish to use.
 5. On each server, assuming that you've created a valid ZK config file (e.g., 
 zk.cfg) and a dataDir, run: 
5.1 java -jar zookeeper-dev-fatjar.jar server ./zk.cfg 
5.2 java -jar zookeeper-dev-fatjar.jar ic <name of this server>:<its 
 client port> <name of this server>:<its client port> /sysTest 
 6. And finally, in order to run the test (from some machine), execute the 
 command: 
 java -jar zookeeper-dev-fatjar.jar generateLoad <name of one of the 
 servers>:<its client port> /sysTest <number of servers> <number of read-only 
 clients> <number of mixed workload clients> <number of write-only clients>
 Note that /sysTest is the same name that we used in 5.2.
 You'll see a "Preferred List is empty" message, and after a few seconds you 
 should get notifications of "Accepted connection from Socket[". 
 Afterwards, just set the write percentage of the mixed workload clients by 
 entering a percentage number, and the test will start.
 Some explanation regarding the new output (which is printed every 6 seconds, 
 and is reset every time you enter a new percentage):
 Interval: <interval number> <time>
 Test info: <number of RO clients>xRO <number of mixed workload 
 clients>x<their write percentage>%W <number of write-only clients>xWO, 
 percentiles [0.5, 0.9, 0.95, 0.99]
 Throughput: <current interval throughput> | <minimum throughput until now> 
 <average throughput until now> <maximum throughput until now>
 Read latency: interval [interval's read latency values according to the 
 percentiles], total [read latency values until now, according to the 
 percentiles]
 Write latency: interval [interval's write latency values according to the 
 percentiles], total [write latency values until now, according to the 
 percentiles]
 Note that throughput is in requests per second and latency is in ms. In 
 addition, if you perform a read-only / write-only test, you won't see 
 the printout of write / read latency.
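The interval-vs-total bookkeeping described above can be sketched as follows. This is an illustrative example, not the patch code; class and method names are hypothetical:

```java
// Hypothetical sketch of per-interval and cumulative throughput tracking,
// in the spirit of the printout described above (min | avg | max until now).
public class ThroughputStats {
    long intervalOps;   // operations counted in the current reporting interval
    long totalOps;      // operations since the last reset
    int intervals;      // completed intervals since the last reset
    double min = Double.MAX_VALUE;
    double max = 0;

    void add(long ops) {
        intervalOps += ops;
    }

    // Called at the end of each reporting interval (6 s in the test);
    // returns the interval's throughput in requests per second.
    double closeInterval(double intervalSeconds) {
        double tput = intervalOps / intervalSeconds;
        min = Math.min(min, tput);
        max = Math.max(max, tput);
        totalOps += intervalOps;
        intervals++;
        intervalOps = 0;
        return tput;
    }

    // Average throughput over all completed intervals.
    double avg(double intervalSeconds) {
        return totalOps / (intervals * intervalSeconds);
    }
}
```

Entering a new percentage corresponds to constructing a fresh instance, which is why the min/avg/max columns reset.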



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2023) Improved system test

2014-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129807#comment-14129807
 ] 

Hadoop QA commented on ZOOKEEPER-2023:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12667966/ZOOKEEPER-2023.patch
  against trunk revision 1623916.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 2.0.3) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2329//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2329//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2329//console

This message is automatically generated.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Failed: ZOOKEEPER-2023 PreCommit Build #2329

2014-09-11 Thread Apache Jenkins Server
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2023
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2329/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 318546 lines...]
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 3 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
(version 2.0.3) warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] -1 core tests.  The patch failed core unit tests.
 [exec] 
 [exec] +1 contrib tests.  The patch passed contrib unit tests.
 [exec] 
 [exec] Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2329//testReport/
 [exec] Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2329//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
 [exec] Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2329//console
 [exec] 
 [exec] This message is automatically generated.
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Adding comment to Jira.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] Comment added.
 [exec] ea6edf9d669cbeb5abf9f9d394b24b1a95cb1b8c logged out
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build.xml:1713:
 exec returned: 1

Total time: 38 minutes 49 seconds
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to PreCommit-ZOOKEEPER-Build #2179
Archived 7 artifacts
Archive block size is 32768
Received 0 blocks and 547196 bytes
Compression is 0.0%
Took 2.5 sec
Recording test results
Description set: ZOOKEEPER-2023
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.zookeeper.test.NioNettySuiteHammerTest.testHammer

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.




Re: Review Request 25160: Major throughput improvement with mixed workloads

2014-09-11 Thread Kfir Lev-Ari

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25160/
---

(Updated Sept. 11, 2014, 9:25 a.m.)


Review request for zookeeper, Raul Gutierrez Segales and Alexander Shraer.


Changes
---

Removed an unneeded cast from old line 298 ((Map.Entry<Long, 
LinkedList<Request>>)), and along the way moved the function to match reading order.


Repository: zookeeper


Description
---

Please see https://issues.apache.org/jira/browse/ZOOKEEPER-2024


Diffs (updated)
-

  ./src/java/main/org/apache/zookeeper/server/quorum/CommitProcessor.java 
1619360 
  
./src/java/test/org/apache/zookeeper/server/quorum/CommitProcessorConcurrencyTest.java
 1619360 
  ./src/java/test/org/apache/zookeeper/server/quorum/CommitProcessorTest.java 
1619360 

Diff: https://reviews.apache.org/r/25160/diff/


Testing
---

The attached unit tests, as well as the system test found in 
https://issues.apache.org/jira/browse/ZOOKEEPER-2023. 


Thanks,

Kfir Lev-Ari



[jira] [Updated] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads

2014-09-11 Thread Kfir Lev-Ari (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kfir Lev-Ari updated ZOOKEEPER-2024:

Attachment: ZOOKEEPER-2024.patch

 Major throughput improvement with mixed workloads
 -

 Key: ZOOKEEPER-2024
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2024
 Project: ZooKeeper
  Issue Type: Improvement
  Components: quorum, server
Reporter: Kfir Lev-Ari
Assignee: Kfir Lev-Ari
 Attachments: ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, 
 ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, 
 ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch


 The patch is applied to the commit processor, and solves two problems:
 1. Stalling - once the commit processor encounters a local write request, it 
 stalls local processing of all sessions until it receives a commit of that 
 request from the leader. 
 In mixed workloads, this severely hampers performance, as it does not allow 
 read-only sessions to proceed faster than read-write ones.
 2. Starvation - as long as there are read requests to process, older remote 
 committed write requests are starved. 
 This occurs due to a bug fix 
 (https://issues.apache.org/jira/browse/ZOOKEEPER-1505) that forces processing 
 of local read requests before handling any committed write. The problem is 
 only manifested under high local read load. 
 Our solution solves both problems. It improves throughput in mixed 
 workloads (in our tests, by up to 8x) and reduces latency, especially at 
 higher percentiles (i.e., the slowest requests). 
 The main idea is to separate sessions that inherently need to stall in order 
 to enforce order semantics, from ones that do not need to stall. To this end, 
 we add data structures for buffering and managing pending requests of stalled 
 sessions; these requests are moved out of the critical path to these data 
 structures, allowing continued processing of unaffected sessions. 
 In order to avoid starvation, our solution prioritizes committed write 
 requests over reads, and enforces fairness among read requests of sessions. 
 Please see the docs:  
 1) 
 https://docs.google.com/document/d/1oXJiSt9VqL35hCYQRmFuC63ETd0F_g6uApzocgkFe3Y/edit?usp=sharing
  - includes a detailed description of the new commit processor algorithm.
 2) The attached patch implements our solution, and a collection of related 
 unit tests (https://reviews.apache.org/r/25160)
 3) 
 https://docs.google.com/spreadsheets/d/11mmobkIf-0czIyEEwgytwqRme5OH8tmZcb4EBcsMZ_w/edit?usp=sharing
  - new performance results.
 https://docs.google.com/spreadsheets/d/1vmdfsq4WLr92BQO-CGcualE0KhAtjIu3bCaVwYajLo8/edit?usp=sharing
  - shows (old) performance results of running system tests on the patched ZK 
 using the patched system test from 
 https://issues.apache.org/jira/browse/ZOOKEEPER-2023. 
 See also https://issues.apache.org/jira/browse/ZOOKEEPER-1609
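The core idea above, stalling only the sessions that must stall to preserve order, can be sketched roughly as below. This is an illustrative toy, not the CommitProcessor implementation; all names and the request representation are hypothetical:

```java
// Illustrative sketch: buffer pending requests per stalled session so that
// reads from unaffected sessions keep flowing. Not the actual patch code.
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class SessionAwareQueue {
    // Requests buffered for sessions that are awaiting a commit.
    final Map<Long, Queue<String>> stalled = new HashMap<>();

    // Returns true if the request may be processed immediately.
    boolean submit(long sessionId, String request, boolean isWrite) {
        Queue<String> q = stalled.get(sessionId);
        if (q != null) {
            q.add(request);      // session already stalled: preserve its order
            return false;
        }
        if (isWrite) {
            stalled.put(sessionId, new ArrayDeque<>());
            return false;        // the write waits for its commit from the leader
        }
        return true;             // independent read: process now
    }

    // On commit of the session's write, release its buffered requests.
    Queue<String> commit(long sessionId) {
        return stalled.remove(sessionId);
    }
}
```

In the real algorithm the drained requests re-enter the processing pipeline, and committed writes are prioritized over reads to prevent the starvation described in point 2.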



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


ZooKeeper-trunk-ibm6 - Build # 612 - Still Failing

2014-09-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk-ibm6/612/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 353076 lines...]
[junit] 2014-09-11 09:35:58,705 [myid:] - INFO  [main:JMXEnv@142] - 
ensureOnly:[]
[junit] 2014-09-11 09:35:58,706 [myid:] - INFO  [main:ClientBase@443] - 
STARTING server
[junit] 2014-09-11 09:35:58,707 [myid:] - INFO  [main:ClientBase@364] - 
CREATING server instance 127.0.0.1:11221
[junit] 2014-09-11 09:35:58,708 [myid:] - INFO  
[main:NIOServerCnxnFactory@670] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 2 selector thread(s), 32 worker threads, and 64 
kB direct buffers.
[junit] 2014-09-11 09:35:58,712 [myid:] - INFO  
[main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-09-11 09:35:58,713 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-09-11 09:35:58,713 [myid:] - INFO  [main:ZooKeeperServer@781] 
- minSessionTimeout set to 6000
[junit] 2014-09-11 09:35:58,713 [myid:] - INFO  [main:ZooKeeperServer@790] 
- maxSessionTimeout set to 6
[junit] 2014-09-11 09:35:58,714 [myid:] - INFO  [main:ZooKeeperServer@152] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-ibm6/trunk/build/test/tmp/test6702404513525541972.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-ibm6/trunk/build/test/tmp/test6702404513525541972.junit.dir/version-2
[junit] 2014-09-11 09:35:58,715 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-ibm6/trunk/build/test/tmp/test6702404513525541972.junit.dir/version-2/snapshot.b
[junit] 2014-09-11 09:35:58,717 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-ibm6/trunk/build/test/tmp/test6702404513525541972.junit.dir/version-2/snapshot.b
[junit] 2014-09-11 09:35:58,719 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-09-11 09:35:58,720 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:32980
[junit] 2014-09-11 09:35:58,721 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:32980
[junit] 2014-09-11 09:35:58,721 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-09-11 09:35:58,722 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:32980 (no session established for client)
[junit] 2014-09-11 09:35:58,722 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-09-11 09:35:58,726 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-09-11 09:35:58,726 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-09-11 09:35:58,727 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-09-11 09:35:58,727 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-09-11 09:35:58,728 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 4996
[junit] 2014-09-11 09:35:58,728 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 40
[junit] 2014-09-11 09:35:58,728 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-09-11 09:35:58,729 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-09-11 09:35:58,744 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x1486411c1c0 closed
[junit] 2014-09-11 09:35:58,744 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-09-11 09:35:58,745 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-09-11 09:35:58,745 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2014-09-11 09:35:58,745 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-09-11 09:35:58,745 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-09-11 09:35:58,745 [myid:] - INFO  

ZooKeeper_branch34_openjdk7 - Build # 631 - Failure

2014-09-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper_branch34_openjdk7/631/

###
## LAST 60 LINES OF THE CONSOLE 
###
Started by timer
Building remotely on ubuntu-2 (Ubuntu ubuntu) in workspace 
/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_openjdk7
Updating http://svn.apache.org/repos/asf/zookeeper/branches/branch-3.4 at 
revision '2014-09-11T10:02:18.755 +'
At revision 1624243
no change for http://svn.apache.org/repos/asf/zookeeper/branches/branch-3.4 
since the previous build
No emails were triggered.
[locks-and-latches] Checking to see if we really have the locks
[locks-and-latches] Have all the locks, build can start
[branch-3.4] $ /home/jenkins/tools/ant/latest/bin/ant -Dtest.output=yes 
-Dtest.junit.output.format=xml -Djavac.target=1.7 clean test-core-java
Error: JAVA_HOME is not defined correctly.
  We cannot execute /usr/lib/jvm/java-7-openjdk-amd64//bin/java
Build step 'Invoke Ant' marked build as failure
[locks-and-latches] Releasing all the locks
[locks-and-latches] All the locks released
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads

2014-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129844#comment-14129844
 ] 

Hadoop QA commented on ZOOKEEPER-2024:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12667975/ZOOKEEPER-2024.patch
  against trunk revision 1623916.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 11 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 2.0.3) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2330//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2330//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2330//console

This message is automatically generated.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Failed: ZOOKEEPER-2024 PreCommit Build #2330

2014-09-11 Thread Apache Jenkins Server
Jira: https://issues.apache.org/jira/browse/ZOOKEEPER-2024
Build: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2330/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 312657 lines...]
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 11 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
(version 2.0.3) warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] -1 core tests.  The patch failed core unit tests.
 [exec] 
 [exec] +1 contrib tests.  The patch passed contrib unit tests.
 [exec] 
 [exec] Test results: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2330//testReport/
 [exec] Findbugs warnings: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2330//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
 [exec] Console output: 
https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2330//console
 [exec] 
 [exec] This message is automatically generated.
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Adding comment to Jira.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] Comment added.
 [exec] 13082472a24d60b3adf3f626e8292127b65e824c logged out
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build.xml:1713:
 exec returned: 1

Total time: 39 minutes 8 seconds
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to PreCommit-ZOOKEEPER-Build #2179
Archived 7 artifacts
Archive block size is 32768
Received 0 blocks and 547822 bytes
Compression is 0.0%
Took 2.2 sec
Recording test results
Description set: ZOOKEEPER-2024
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.zookeeper.test.NioNettySuiteHammerTest.testHammer

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.




ZooKeeper_branch35_jdk7 - Build # 47 - Failure

2014-09-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper_branch35_jdk7/47/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 325047 lines...]
[junit] 2014-09-11 10:40:03,375 [myid:] - INFO  [main:ClientBase@364] - 
CREATING server instance 127.0.0.1:11221
[junit] 2014-09-11 10:40:03,375 [myid:] - INFO  
[main:NIOServerCnxnFactory@670] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 2 selector thread(s), 32 worker threads, and 64 
kB direct buffers.
[junit] 2014-09-11 10:40:03,375 [myid:] - INFO  
[main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-09-11 10:40:03,376 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-09-11 10:40:03,376 [myid:] - INFO  [main:ZooKeeperServer@781] 
- minSessionTimeout set to 6000
[junit] 2014-09-11 10:40:03,376 [myid:] - INFO  [main:ZooKeeperServer@790] 
- maxSessionTimeout set to 6
[junit] 2014-09-11 10:40:03,376 [myid:] - INFO  [main:ZooKeeperServer@152] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk7/branch-3.5/build/test/tmp/test5888525223006301088.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk7/branch-3.5/build/test/tmp/test5888525223006301088.junit.dir/version-2
[junit] 2014-09-11 10:40:03,377 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk7/branch-3.5/build/test/tmp/test5888525223006301088.junit.dir/version-2/snapshot.b
[junit] 2014-09-11 10:40:03,380 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk7/branch-3.5/build/test/tmp/test5888525223006301088.junit.dir/version-2/snapshot.b
[junit] 2014-09-11 10:40:03,382 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-09-11 10:40:03,382 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:46048
[junit] 2014-09-11 10:40:03,383 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:46048
[junit] 2014-09-11 10:40:03,383 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-09-11 10:40:03,384 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:46048 (no session established for client)
[junit] 2014-09-11 10:40:03,384 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-09-11 10:40:03,386 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-09-11 10:40:03,386 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-09-11 10:40:03,386 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-09-11 10:40:03,386 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-09-11 10:40:03,387 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 18110
[junit] 2014-09-11 10:40:03,387 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24
[junit] 2014-09-11 10:40:03,387 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-09-11 10:40:03,387 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-09-11 10:40:03,450 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x148644c6c63 closed
[junit] 2014-09-11 10:40:03,450 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-09-11 10:40:03,450 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-09-11 10:40:03,451 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-09-11 10:40:03,451 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2014-09-11 10:40:03,451 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-09-11 10:40:03,451 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-09-11 10:40:03,452 [myid:] - INFO  

ZooKeeper-trunk - Build # 2435 - Failure

2014-09-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk/2435/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 321485 lines...]
[junit] 2014-09-11 11:11:23,093 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test772291667801407298.junit.dir/version-2/snapshot.b
[junit] 2014-09-11 11:11:23,096 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-09-11 11:11:23,096 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:53500
[junit] 2014-09-11 11:11:23,098 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:53500
[junit] 2014-09-11 11:11:23,098 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-09-11 11:11:23,099 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:53500 (no session established for client)
[junit] 2014-09-11 11:11:23,099 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-09-11 11:11:23,101 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-09-11 11:11:23,101 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-09-11 11:11:23,101 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-09-11 11:11:23,101 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-09-11 11:11:23,102 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 83946
[junit] 2014-09-11 11:11:23,102 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24
[junit] 2014-09-11 11:11:23,102 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-09-11 11:11:23,102 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-09-11 11:11:23,140 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x14864691aa3 closed
[junit] 2014-09-11 11:11:23,141 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-09-11 11:11:23,141 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-09-11 11:11:23,141 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2014-09-11 11:11:23,142 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-09-11 11:11:23,141 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-09-11 11:11:23,141 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-09-11 11:11:23,143 [myid:] - INFO  [main:ZooKeeperServer@443] 
- shutting down
[junit] 2014-09-11 11:11:23,143 [myid:] - INFO  
[main:SessionTrackerImpl@231] - Shutting down
[junit] 2014-09-11 11:11:23,143 [myid:] - INFO  
[main:PrepRequestProcessor@973] - Shutting down
[junit] 2014-09-11 11:11:23,143 [myid:] - INFO  
[main:SyncRequestProcessor@191] - Shutting down
[junit] 2014-09-11 11:11:23,143 [myid:] - INFO  [ProcessThread(sid:0 
cport:-1)::PrepRequestProcessor@155] - PrepRequestProcessor exited loop!
[junit] 2014-09-11 11:11:23,144 [myid:] - INFO  
[SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited!
[junit] 2014-09-11 11:11:23,145 [myid:] - INFO  
[main:FinalRequestProcessor@476] - shutdown of request processor complete
[junit] 2014-09-11 11:11:23,145 [myid:] - INFO  [main:MBeanRegistry@119] - 
Unregister MBean 
[org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree]
[junit] 2014-09-11 11:11:23,145 [myid:] - INFO  [main:MBeanRegistry@119] - 
Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port-1]
[junit] 2014-09-11 11:11:23,146 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-09-11 11:11:23,147 [myid:] - INFO  [main:JMXEnv@142] - 
ensureOnly:[]
[junit] 2014-09-11 11:11:23,151 [myid:] - INFO  [main:ClientBase@545] - 
fdcount after test is: 46 at start it was 33
[junit] 2014-09-11 11:11:23,152 [myid:] - INFO  [main:ClientBase@547] - 
sleeping for 20 secs
[junit] 2014-09-11 11:11:23,153 

ZooKeeper-trunk-jdk7 - Build # 974 - Failure

2014-09-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk-jdk7/974/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 320844 lines...]
[junit] 2014-09-11 11:17:15,403 [myid:] - INFO  [main:ClientBase@443] - 
STARTING server
[junit] 2014-09-11 11:17:15,403 [myid:] - INFO  [main:ClientBase@364] - 
CREATING server instance 127.0.0.1:11221
[junit] 2014-09-11 11:17:15,404 [myid:] - INFO  
[main:NIOServerCnxnFactory@670] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 2 selector thread(s), 32 worker threads, and 64 
kB direct buffers.
[junit] 2014-09-11 11:17:15,404 [myid:] - INFO  
[main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-09-11 11:17:15,404 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-09-11 11:17:15,405 [myid:] - INFO  [main:ZooKeeperServer@781] 
- minSessionTimeout set to 6000
[junit] 2014-09-11 11:17:15,405 [myid:] - INFO  [main:ZooKeeperServer@790] 
- maxSessionTimeout set to 6
[junit] 2014-09-11 11:17:15,405 [myid:] - INFO  [main:ZooKeeperServer@152] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk7/trunk/build/test/tmp/test8741180163767790925.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk7/trunk/build/test/tmp/test8741180163767790925.junit.dir/version-2
[junit] 2014-09-11 11:17:15,406 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk7/trunk/build/test/tmp/test8741180163767790925.junit.dir/version-2/snapshot.b
[junit] 2014-09-11 11:17:15,408 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk7/trunk/build/test/tmp/test8741180163767790925.junit.dir/version-2/snapshot.b
[junit] 2014-09-11 11:17:15,409 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-09-11 11:17:15,410 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:46330
[junit] 2014-09-11 11:17:15,411 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:46330
[junit] 2014-09-11 11:17:15,411 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-09-11 11:17:15,411 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:46330 (no session established for client)
[junit] 2014-09-11 11:17:15,412 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-09-11 11:17:15,413 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-09-11 11:17:15,414 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-09-11 11:17:15,414 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-09-11 11:17:15,414 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-09-11 11:17:15,414 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 18087
[junit] 2014-09-11 11:17:15,414 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24
[junit] 2014-09-11 11:17:15,415 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-09-11 11:17:15,415 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-09-11 11:17:15,484 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x148646e7b4b closed
[junit] 2014-09-11 11:17:15,484 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-09-11 11:17:15,484 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-09-11 11:17:15,484 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 2014-09-11 11:17:15,484 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-09-11 11:17:15,484 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-09-11 11:17:15,484 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 

ZooKeeper-trunk-jdk8 - Build # 137 - Failure

2014-09-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/ZooKeeper-trunk-jdk8/137/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 337525 lines...]
[junit] 2014-09-11 11:55:30,883 [myid:] - INFO  [main:ClientBase@443] - 
STARTING server
[junit] 2014-09-11 11:55:30,884 [myid:] - INFO  [main:ClientBase@364] - 
CREATING server instance 127.0.0.1:11221
[junit] 2014-09-11 11:55:30,884 [myid:] - INFO  
[main:NIOServerCnxnFactory@670] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 2 selector thread(s), 32 worker threads, and 64 
kB direct buffers.
[junit] 2014-09-11 11:55:30,884 [myid:] - INFO  
[main:NIOServerCnxnFactory@683] - binding to port 0.0.0.0/0.0.0.0:11221
[junit] 2014-09-11 11:55:30,884 [myid:] - INFO  [main:ClientBase@339] - 
STARTING server instance 127.0.0.1:11221
[junit] 2014-09-11 11:55:30,885 [myid:] - INFO  [main:ZooKeeperServer@781] 
- minSessionTimeout set to 6000
[junit] 2014-09-11 11:55:30,885 [myid:] - INFO  [main:ZooKeeperServer@790] 
- maxSessionTimeout set to 6
[junit] 2014-09-11 11:55:30,885 [myid:] - INFO  [main:ZooKeeperServer@152] 
- Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 
6 datadir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk8/trunk/build/test/tmp/test7122365156369491805.junit.dir/version-2
 snapdir 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk8/trunk/build/test/tmp/test7122365156369491805.junit.dir/version-2
[junit] 2014-09-11 11:55:30,886 [myid:] - INFO  [main:FileSnap@83] - 
Reading snapshot 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk8/trunk/build/test/tmp/test7122365156369491805.junit.dir/version-2/snapshot.b
[junit] 2014-09-11 11:55:30,888 [myid:] - INFO  [main:FileTxnSnapLog@298] - 
Snapshotting: 0xb to 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-jdk8/trunk/build/test/tmp/test7122365156369491805.junit.dir/version-2/snapshot.b
[junit] 2014-09-11 11:55:30,889 [myid:] - INFO  
[main:FourLetterWordMain@43] - connecting to 127.0.0.1 11221
[junit] 2014-09-11 11:55:30,890 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296]
 - Accepted socket connection from /127.0.0.1:50224
[junit] 2014-09-11 11:55:30,891 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@835] - Processing stat command from 
/127.0.0.1:50224
[junit] 2014-09-11 11:55:30,891 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn$StatCommand@684] - Stat command output
[junit] 2014-09-11 11:55:30,891 [myid:] - INFO  
[NIOWorkerThread-1:NIOServerCnxn@1006] - Closed socket connection for client 
/127.0.0.1:50224 (no session established for client)
[junit] 2014-09-11 11:55:30,891 [myid:] - INFO  [main:JMXEnv@224] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port]
[junit] 2014-09-11 11:55:30,893 [myid:] - INFO  [main:JMXEnv@241] - 
expect:InMemoryDataTree
[junit] 2014-09-11 11:55:30,893 [myid:] - INFO  [main:JMXEnv@245] - 
found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=InMemoryDataTree
[junit] 2014-09-11 11:55:30,893 [myid:] - INFO  [main:JMXEnv@241] - 
expect:StandaloneServer_port
[junit] 2014-09-11 11:55:30,893 [myid:] - INFO  [main:JMXEnv@245] - 
found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port-1
[junit] 2014-09-11 11:55:30,893 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 62020
[junit] 2014-09-11 11:55:30,894 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 24
[junit] 2014-09-11 11:55:30,894 [myid:] - INFO  
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testQuota
[junit] 2014-09-11 11:55:30,894 [myid:] - INFO  [main:ClientBase@520] - 
tearDown starting
[junit] 2014-09-11 11:55:30,962 [myid:] - INFO  [main:ZooKeeper@968] - 
Session: 0x148649181e2 closed
[junit] 2014-09-11 11:55:30,963 [myid:] - INFO  [main:ClientBase@490] - 
STOPPING server
[junit] 2014-09-11 11:55:30,962 [myid:] - INFO  
[main-EventThread:ClientCnxn$EventThread@529] - EventThread shut down
[junit] 2014-09-11 11:55:30,963 [myid:] - INFO  
[NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@219]
 - accept thread exitted run method
[junit] 2014-09-11 11:55:30,963 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-09-11 11:55:30,963 [myid:] - INFO  
[NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] 
- selector thread exitted run method
[junit] 2014-09-11 11:55:30,963 [myid:] - INFO  
[ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - 
ConnnectionExpirerThread interrupted
[junit] 

Fwd: zookeeper_dashboard help

2014-09-11 Thread Jammy Wolf
F.M. Chen Junmin

-- Forwarded message --
From: Jammy Wolf cjmm...@gmail.com
Date: Tue, Sep 9, 2014 at 7:52 PM
Subject: zookeeper_dashboard help
To: phu...@gmail.com



First of all, I think your zookeeper_dashboard is very friendly and useful.
But a project I encountered needs to support a list of clusters rather than
one. So, I forked your code, then refactored it (
https://github.com/jammyWolf/zookeeper_dashboard.git). I changed the logic
of your code but kept the web design part. My version supports multiple
clusters and acl_control (thanks to kazoo's auth_data params).
Finally, I'm a beginner in ZooKeeper; could you help me by reviewing the
code and giving me some advice? Thanks!

F.M. jammyWolf


[jira] [Commented] (ZOOKEEPER-900) FLE implementation should be improved to use non-blocking sockets

2014-09-11 Thread Reed Wanderman-Milne (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14130830#comment-14130830
 ] 

Reed Wanderman-Milne commented on ZOOKEEPER-900:


Hi,

I'm wondering if there's any progress on this JIRA. I'm running into an issue 
similar to that of ZOOKEEPER-1678, which can be solved by fixing this. If no 
one is working on it, I'd be happy to take a stab at it.

[~vishalmlst]'s patch added a timeout for connections to other peers, but it 
still appears that only one connection can be processed at a time. 
Additionally, in connectOne(long), a lock on the QuorumPeer is held, preventing 
other threads from accessing it. Both these issues seem to contribute to 
ZOOKEEPER-1678. [~vishalmlst] suggested in an earlier comment moving the 
socket operations to SenderWorker and RecvWorker, which would prevent socket 
operations from blocking other connections.

Let me know what your thoughts are. Thanks!

 FLE implementation should be improved to use non-blocking sockets
 -

 Key: ZOOKEEPER-900
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-900
 Project: ZooKeeper
  Issue Type: Bug
Reporter: Vishal Kher
Assignee: Vishal Kher
Priority: Critical
 Fix For: 3.5.1

 Attachments: ZOOKEEPER-900.patch, ZOOKEEPER-900.patch1, 
 ZOOKEEPER-900.patch2


 From earlier email exchanges:
 1. Blocking connects and accepts:
 a) The first problem is in manager.toSend(). This invokes connectOne(), which 
 does a blocking connect. While testing, I changed the code so that 
 connectOne() starts a new thread called AsyncConnct(). AsyncConnect.run() 
 does a socketChannel.connect(). After starting AsyncConnect, connectOne 
 starts a timer. connectOne continues with normal operations if the connection 
 is established before the timer expires, otherwise, when the timer expires it 
 interrupts AsyncConnect() thread and returns. In this way, I can have an 
 upper bound on the amount of time we need to wait for connect to succeed. Of 
 course, this was a quick fix for my testing. Ideally, we should use Selector 
 to do non-blocking connects/accepts. I am planning to do that later once we 
 at least have a quick fix for the problem and consensus from others for the 
 real fix (this problem is a big blocker for us). Note that it is OK to do 
 blocking IO in SenderWorker and RecvWorker threads since they block IO to the 
 respective peer.
 b) The blocking IO problem is not just restricted to connectOne(), but also 
 in receiveConnection(). The Listener thread calls receiveConnection() for 
 each incoming connection request. receiveConnection does blocking IO to get 
 peer's info (s.read(msgBuffer)). Worse, it invokes connectOne() back to the 
 peer that had sent the connection request. All of this is happening from the 
 Listener. In short, if a peer fails after initiating a connection, the 
 Listener thread won't be able to accept connections from other peers, because 
 it would be stuck in read() or connectOne(). Also the code has an inherent 
 cycle. initiateConnection() and receiveConnection() will have to be very 
 carefully synchronized otherwise, we could run into deadlocks. This code is 
 going to be difficult to maintain/modify.
 Also see: https://issues.apache.org/jira/browse/ZOOKEEPER-822
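The bounded-wait connect described in (a) above can be sketched with java.nio: a non-blocking connect plus a Selector timeout gives an upper bound on how long a peer connection attempt may block. This is only a minimal illustration (the class and helper names are hypothetical, not the actual FLE code); a local listener stands in for a quorum peer.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingConnectDemo {

    // Non-blocking connect with a Selector timeout, instead of a
    // blocking SocketChannel.connect() call: returns null on timeout.
    static SocketChannel connectWithTimeout(InetSocketAddress addr, long timeoutMs)
            throws IOException {
        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);
        if (ch.connect(addr)) {
            return ch; // connected immediately (common on loopback)
        }
        try (Selector sel = Selector.open()) {
            ch.register(sel, SelectionKey.OP_CONNECT);
            if (sel.select(timeoutMs) == 0 || !ch.finishConnect()) {
                ch.close();
                return null; // upper bound on the wait reached
            }
        }
        return ch;
    }

    public static void main(String[] args) throws IOException {
        // A local listener stands in for a quorum peer.
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            InetSocketAddress addr = (InetSocketAddress) server.getLocalAddress();
            SocketChannel ch = connectWithTimeout(addr, 1000);
            System.out.println(ch != null ? "connected" : "timed out");
            if (ch != null) {
                ch.close();
            }
        }
    }
}
```

Unlike the timer-plus-interrupt workaround described above, no extra thread is needed per connection attempt.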



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-1295) Documentation for jute.maxbuffer is not correct in ZooKeeper Administrator's Guide

2014-09-11 Thread chendihao (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14130944#comment-14130944
 ] 

chendihao commented on ZOOKEEPER-1295:
--

[~davelatham] I'm also confused about this principle of setting it on both servers and 
clients. What happens if we only set it on the server side?

 Documentation for jute.maxbuffer is not correct in ZooKeeper Administrator's 
 Guide
 --

 Key: ZOOKEEPER-1295
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1295
 Project: ZooKeeper
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.3.2
Reporter: Daniel Lord
  Labels: newbie

 The jute maxbuffer size is documented as being defaulted to 1 megabyte in the 
 administrators guide.  I believe that this is true server side but it is not 
 true client side.  On the client side the default is (at least in 3.3.2) this:
 packetLen = Integer.getInteger("jute.maxbuffer", 4096 * 1024);
 On the server side the documentation looks to be correct:
 private static int determineMaxBuffer() {
 String maxBufferString = System.getProperty("jute.maxbuffer");
 try {
 return Integer.parseInt(maxBufferString);
 } catch(Exception e) {
 return 0xfffff;
 }
 
 }
 The documentation states this:
 jute.maxbuffer:
 (Java system property: jute.maxbuffer)
 This option can only be set as a Java system property. There is no zookeeper 
 prefix on it. It specifies the maximum size of the data that can be stored in 
 a znode. The default is 0xfffff, or just under 1M. If this option is changed, 
 the system property must be set on all servers and clients otherwise problems 
 will arise. This is really a sanity check. ZooKeeper is designed to store 
 data on the order of kilobytes in size.
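The mismatch the comment describes can be shown side by side. This sketch (the class name is hypothetical) evaluates both defaults when the jute.maxbuffer system property is unset: the client-side expression yields 4 MB, while the server-side fallback yields just under 1 MB.

```java
public class JuteMaxBufferDefaults {
    public static void main(String[] args) {
        // Client-side default from the snippet above: 4096 * 1024 = 4 MB.
        int clientDefault = Integer.getInteger("jute.maxbuffer", 4096 * 1024);
        // Server-side fallback from determineMaxBuffer(): 0xfffff, just under 1 MB.
        int serverDefault;
        try {
            serverDefault = Integer.parseInt(System.getProperty("jute.maxbuffer"));
        } catch (Exception e) {
            // Property unset: parseInt(null) throws, so the fallback applies.
            serverDefault = 0xfffff;
        }
        System.out.println("client=" + clientDefault + " server=" + serverDefault);
    }
}
```

With the property unset this prints client=4194304 server=1048575, i.e. the two sides disagree by a factor of four.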



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2016) Automate client-side rebalancing

2014-09-11 Thread Hongchao Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14131098#comment-14131098
 ] 

Hongchao Deng commented on ZOOKEEPER-2016:
--

Hi [~shralex].
I think
{code}
zk.updateServerList(connectString);
{code}
only closes the current client connection if it needs to be dropped.

However, in my test, I want to have an explicit point after which all clients have 
finished rebalancing. Do you know any way to achieve that? I wonder if we can make 
ZooKeeper::updateServerList return only after the client has reconnected. Let 
me know your thoughts. Thanks!



 Automate client-side rebalancing
 

 Key: ZOOKEEPER-2016
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2016
 Project: ZooKeeper
  Issue Type: Improvement
Reporter: Hongchao Deng
Assignee: Hongchao Deng
 Attachments: draft-2.patch, draft.patch


 ZOOKEEPER-1355 introduced client-side rebalancing, which is implemented in 
 both the C and Java client libraries. However, it requires the client to 
 detect a configuration change and call updateServerList with the new 
 connection string (see reconfig manual). It may be better if the client just 
 indicates that he is interested in this feature when creating a ZK handle and 
 we'll detect configuration changes and invoke updateServerList for him 
 under the hood.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-2016) Automate client-side rebalancing

2014-09-11 Thread Alexander Shraer (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14131141#comment-14131141
 ] 

Alexander Shraer commented on ZOOKEEPER-2016:
-

I mean in your test; I don't think you should change updateServerList.



[jira] [Commented] (ZOOKEEPER-2016) Automate client-side rebalancing

2014-09-11 Thread Alexander Shraer (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14131140#comment-14131140
 ] 

Alexander Shraer commented on ZOOKEEPER-2016:
-

Can you wait for the connection-established event? I don't remember the exact 
name of the event.
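Waiting for that event is typically done with a watcher that counts down a latch when the connection is re-established. The sketch below shows only the latch pattern (class and names are hypothetical, and a plain thread stands in for the client event thread, since a live ZooKeeper session isn't available here).

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class WaitForReconnectDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch reconnected = new CountDownLatch(1);
        // Stand-in for the client event thread delivering the
        // connection-established event after updateServerList() moves
        // the session to a new server.
        new Thread(() -> {
            try {
                Thread.sleep(100);
            } catch (InterruptedException ignored) {
            }
            reconnected.countDown();
        }).start();
        // The test blocks here until the event arrives (or times out),
        // giving an explicit "rebalance finished" point.
        boolean ok = reconnected.await(5, TimeUnit.SECONDS);
        System.out.println(ok ? "rebalance finished" : "timed out");
    }
}
```

In a real test the countDown() call would live inside the Watcher's process() callback, keyed on the reconnect event.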



[jira] [Commented] (BOOKKEEPER-773) Provide admin tool to rename bookie identifier in Cookies

2014-09-11 Thread Sijie Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129688#comment-14129688
 ] 

Sijie Guo commented on BOOKKEEPER-773:
--

I think this thread is getting kind of long, so let me recap what I commented 
before (since I didn't see the changes in the new patch).

(These comments concern only the rename-cookie part.)

* In general, all class variables in Cookie should be final. Please use a 
Builder pattern to generate a new Cookie object when modifying it. Don't modify 
the fields in-place, which would usually introduce bugs.
* getArgBooleanValue: I would suggest throwing exceptions on any bad 
arguments, rather than silencing them. 
{code}
+private static boolean getArgBooleanValue(String arg, String option, 
boolean defaultVal) {
+try {
+return Boolean.parseBoolean(arg);
+} catch (NumberFormatException nfe) {
+System.err.println("ERROR: invalid value for option " + option + " : " + arg);
+return defaultVal;
+}
+}
{code}
* 
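One possible fail-fast variant of the helper above (hypothetical, not BookKeeper code): note that Boolean.parseBoolean itself never throws NumberFormatException (anything other than "true" simply becomes false), so the quoted catch block is dead code and the validation has to be explicit.

```java
public class StrictBooleanArg {
    // Validate explicitly and fail fast on a bad argument
    // instead of silently falling back to a default.
    static boolean parseBooleanStrict(String arg, String option) {
        if ("true".equalsIgnoreCase(arg)) {
            return true;
        }
        if ("false".equalsIgnoreCase(arg)) {
            return false;
        }
        throw new IllegalArgumentException(
                "invalid value for option " + option + ": " + arg);
    }

    public static void main(String[] args) {
        System.out.println(parseBooleanStrict("true", "-force"));
        try {
            parseBooleanStrict("yes", "-force");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The exception surfaces the bad flag immediately rather than letting a typo silently select the default behavior.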

 Provide admin tool to rename bookie identifier in Cookies
 -

 Key: BOOKKEEPER-773
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-773
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: 001-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 002-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 003-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 004-BOOKKEEPER-773-rename-bookieid.patch, 
 005-BOOKKEEPER-773-rename-bookieid.patch


 The idea of this JIRA is to implement a mechanism to efficiently rename the 
 bookie identifier present in the Cookies. Cookie information will be present 
 in:
 - ledger and journal directories in each Bookie server
 - cookies znode in ZooKeeper



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (BOOKKEEPER-773) Provide admin tool to rename bookie identifier in Cookies

2014-09-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129694#comment-14129694
 ] 

Rakesh R commented on BOOKKEEPER-773:
-

bq. This tool should focus on real production use case, not just print fancy 
information. 
The idea of printing progress details is to help users when there are many 
ledgers; otherwise it would be hard to estimate the completion time.

bq.This tool should focus on real production use case, not just print fancy 
information. The current implementation will definitely have huge side effects 
on real production traffic, if you don't control the number of requests issued 
to zookeeper. so -1 on renameBookieIdInLedger
I agree with you. I'd like to retain the progress information. How about adding 
a 'bandwidth' parameter (by default 10 operations)? This would be the maximum 
number of ledgers that can be renamed concurrently; if an 11th operation comes 
in, it will wait.

bq. it is much clear and easier for reviewing and get things checked in faster.
Yeah I will split the patches and do cleanup.
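The proposed 'bandwidth' limit maps naturally onto a counting semaphore: acquire a permit before submitting each rename, release it when the rename completes, so the 11th submission blocks until one of the 10 in flight finishes. A minimal sketch (names and the workload are hypothetical stand-ins for the cookie/ledger rename):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BandwidthLimitDemo {
    public static void main(String[] args) throws InterruptedException {
        int bandwidth = 10;                      // at most 10 renames in flight
        Semaphore permits = new Semaphore(bandwidth);
        AtomicInteger renamed = new AtomicInteger();
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int ledgerId = 0; ledgerId < 25; ledgerId++) {
            permits.acquire();                   // the 11th submission waits here
            pool.submit(() -> {
                try {
                    renamed.incrementAndGet();   // stand-in for an async ledger rename
                } finally {
                    permits.release();           // free a slot for the next rename
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("renamed=" + renamed.get()
                + " permits=" + permits.availablePermits());
    }
}
```

All 25 renames complete, but never more than 10 run concurrently, which is the throttling behavior described above.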



[jira] [Commented] (BOOKKEEPER-773) Provide admin tool to rename bookie identifier in Cookies

2014-09-11 Thread Sijie Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129701#comment-14129701
 ] 

Sijie Guo commented on BOOKKEEPER-773:
--

{code}
I'd like to retain the progress information.  How about adding a 'bandwidth' 
parameter(by default 10 operations) - this will be maximum number of ledgers 
which can be renamed concurrently. If 11th operation comes it will wait.
{code}

There are actually two concerns with your implementation: 1) you pulled all 
ledgers into a list, which doesn't work if your cluster has a lot of ledgers. 
You might not encounter this issue on an HDFS namenode, but we have this issue on 
a real production cluster, so don't pull all ledgers together into a list; use 
the ledger iterator. 2) You send all the requests immediately, which will overwhelm 
the system.

'bandwidth' (it is actually 'concurrency') is good for 2), but it doesn't 
resolve 1). If you do want to retain the progress information, please provide two 
different implementations, one using your current solution and the other 
using the ledger iterator, so people can choose which to use. Although I 
don't suggest having two implementations of this; it would make code 
maintenance hard.

 Provide admin tool to rename bookie identifier in Cookies
 -

 Key: BOOKKEEPER-773
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-773
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: 001-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 002-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 003-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 004-BOOKKEEPER-773-rename-bookieid.patch, 
 005-BOOKKEEPER-773-rename-bookieid.patch


 The idea of this JIRA is to implement a mechanism to efficiently rename the 
 bookie identifier present in the Cookies. Cookie information will be present 
 in:
 - ledger & journal directories in each Bookie server
 - cookies znode in ZooKeeper





[jira] [Commented] (BOOKKEEPER-773) Provide admin tool to rename bookie identifier in Cookies

2014-09-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129711#comment-14129711
 ] 

Rakesh R commented on BOOKKEEPER-773:
-

bq.you pulled all ledgers into a list, which doesn't work if your cluster have 
a lot of ledgers. 
Actually I pulled all the ledgers to find the total number of ledgers. Is there 
any single API to get the total number of ledgers? Otherwise I'd have to first 
iterate over all the ledgers to get the total count, and then get the iterator 
again and iterate over all the ledgers asynchronously (as I mentioned earlier, 
with 'bandwidth' in place to control the concurrency).
If I understood correctly, you are suggesting one-by-one ledger renames 
(sequential execution). FYI - initially I did it that way, but 
[~iv...@yahoo-inc.com] gave a [comment | 
https://issues.apache.org/jira/browse/BOOKKEEPER-634?focusedCommentId=14010999page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14010999]
 asking to make it concurrent, and I also felt it would be useful.

bq. one is using your current solution, while the other one is using ledger 
iterator. so people could choose what to use. although I don't suggest to have 
two implementation on this, it would make code maintenance become hard.
I also prefer to implement a single good solution; both are not required.



 Provide admin tool to rename bookie identifier in Cookies
 -

 Key: BOOKKEEPER-773
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-773
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: 001-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 002-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 003-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 004-BOOKKEEPER-773-rename-bookieid.patch, 
 005-BOOKKEEPER-773-rename-bookieid.patch


 The idea of this JIRA is to implement a mechanism to efficiently rename the 
 bookie identifier present in the Cookies. Cookie information will be present 
 in:
 - ledger & journal directories in each Bookie server
 - cookies znode in ZooKeeper





[jira] [Commented] (BOOKKEEPER-773) Provide admin tool to rename bookie identifier in Cookies

2014-09-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129730#comment-14129730
 ] 

Rakesh R commented on BOOKKEEPER-773:
-

bq. otw I've to first iterate over all the ledgers and get the total count. 
Adding one more point - due to concurrent ledger creation/deletion by other 
clients, the total count may not match what is seen while iterating during the 
rename. Since the count is not used for validation, we can simply skip the 
check if the pending count goes negative.

 Provide admin tool to rename bookie identifier in Cookies
 -

 Key: BOOKKEEPER-773
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-773
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: 001-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 002-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 003-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 004-BOOKKEEPER-773-rename-bookieid.patch, 
 005-BOOKKEEPER-773-rename-bookieid.patch


 The idea of this JIRA is to implement a mechanism to efficiently rename the 
 bookie identifier present in the Cookies. Cookie information will be present 
 in:
 - ledger & journal directories in each Bookie server
 - cookies znode in ZooKeeper





[jira] [Created] (BOOKKEEPER-782) Use builder pattern for Cookie

2014-09-11 Thread Rakesh R (JIRA)
Rakesh R created BOOKKEEPER-782:
---

 Summary: Use builder pattern for Cookie
 Key: BOOKKEEPER-782
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-782
 Project: Bookkeeper
  Issue Type: Improvement
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0


It would be good to use builder pattern for Cookie, rather than modifying the 
fields in place.





[jira] [Updated] (BOOKKEEPER-782) Use builder pattern for Cookie

2014-09-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/BOOKKEEPER-782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated BOOKKEEPER-782:

Issue Type: Sub-task  (was: Improvement)
Parent: BOOKKEEPER-639

 Use builder pattern for Cookie
 --

 Key: BOOKKEEPER-782
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-782
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0


 It would be good to use builder pattern for Cookie, rather than modifying the 
 fields in place.





[jira] [Updated] (BOOKKEEPER-782) Use builder pattern for Cookie

2014-09-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/BOOKKEEPER-782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated BOOKKEEPER-782:

Attachment: BOOKKEEPER-782.patch

 Use builder pattern for Cookie
 --

 Key: BOOKKEEPER-782
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-782
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-782.patch


 It would be good to use builder pattern for Cookie, rather than modifying the 
 fields in place.





[jira] [Commented] (BOOKKEEPER-782) Use builder pattern for Cookie

2014-09-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129852#comment-14129852
 ] 

Rakesh R commented on BOOKKEEPER-782:
-

+Note:+ Cookie#setInstanceId() will return a new Cookie object with the new 
instance id value. Presently Cookie does not expose its internals through 
getters; I just wanted to keep that same behavior.

 Use builder pattern for Cookie
 --

 Key: BOOKKEEPER-782
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-782
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-782.patch


 It would be good to use builder pattern for Cookie, rather than modifying the 
 fields in place.





[jira] [Commented] (BOOKKEEPER-782) Use builder pattern for Cookie

2014-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129862#comment-14129862
 ] 

Hadoop QA commented on BOOKKEEPER-782:
--

Testing JIRA BOOKKEEPER-782


Patch 
[BOOKKEEPER-782.patch|https://issues.apache.org/jira/secure/attachment/12667984/BOOKKEEPER-782.patch]
 downloaded at Thu Sep 11 10:01:42 UTC 2014



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:red}-1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any line longer than 120
.{color:red}-1{color} the patch does not add/modify any testcase
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warnings
.{color:red}WARNING{color}: the current HEAD has 23 Javadoc warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:green}+1 FINDBUGS{color}
.{color:green}+1{color} the patch does not seem to introduce new Findbugs 
warnings
{color:red}-1 TESTS{color}
.Tests run: 921
.Tests failed: 1
.Tests errors: 1

.The patch failed the following testcases:

.  testLedgerCheck(org.apache.bookkeeper.client.BookKeeperCloseTest)

{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 


{color:red}*-1 Overall result, please check the reported -1(s)*{color}

{color:red}.   There is at least one warning, please check{color}

The full output of the test-patch run is available at

.   https://builds.apache.org/job/bookkeeper-trunk-precommit-build/716/

 Use builder pattern for Cookie
 --

 Key: BOOKKEEPER-782
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-782
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-782.patch


 It would be good to use builder pattern for Cookie, rather than modifying the 
 fields in place.





[jira] [Commented] (BOOKKEEPER-782) Use builder pattern for Cookie

2014-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129888#comment-14129888
 ] 

Hadoop QA commented on BOOKKEEPER-782:
--

Testing JIRA BOOKKEEPER-782


Patch 
[BOOKKEEPER-782.patch|https://issues.apache.org/jira/secure/attachment/12667984/BOOKKEEPER-782.patch]
 downloaded at Thu Sep 11 10:37:39 UTC 2014



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:red}-1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any line longer than 120
.{color:red}-1{color} the patch does not add/modify any testcase
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warnings
.{color:red}WARNING{color}: the current HEAD has 23 Javadoc warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:green}+1 FINDBUGS{color}
.{color:green}+1{color} the patch does not seem to introduce new Findbugs 
warnings
{color:red}-1 TESTS{color}
.Tests run: 921
.Tests failed: 0
.Tests errors: 1

.The patch failed the following testcases:

.  

{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 


{color:red}*-1 Overall result, please check the reported -1(s)*{color}

{color:red}.   There is at least one warning, please check{color}

The full output of the test-patch run is available at

.   https://builds.apache.org/job/bookkeeper-trunk-precommit-build/717/

 Use builder pattern for Cookie
 --

 Key: BOOKKEEPER-782
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-782
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-782.patch


 It would be good to use builder pattern for Cookie, rather than modifying the 
 fields in place.





[jira] [Updated] (BOOKKEEPER-773) Provide admin tool to rename bookie identifier in Cookies

2014-09-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/BOOKKEEPER-773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated BOOKKEEPER-773:

Attachment: 006-BOOKKEEPER-773-rename-bookieid.patch

 Provide admin tool to rename bookie identifier in Cookies
 -

 Key: BOOKKEEPER-773
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-773
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: 001-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 002-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 003-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 004-BOOKKEEPER-773-rename-bookieid.patch, 
 005-BOOKKEEPER-773-rename-bookieid.patch, 
 006-BOOKKEEPER-773-rename-bookieid.patch


 The idea of this JIRA is to implement a mechanism to efficiently rename the 
 bookie identifier present in the Cookies. Cookie information will be present 
 in:
 - ledger & journal directories in each Bookie server
 - cookies znode in ZooKeeper





[jira] [Commented] (BOOKKEEPER-773) Provide admin tool to rename bookie identifier in Cookies

2014-09-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129898#comment-14129898
 ] 

Rakesh R commented on BOOKKEEPER-773:
-

Thanks [~hustlmsp] for your time and quick replies.
bq.in general, all class variables in Cookie should be final. please use a 
Builder pattern to generate a new Cookie object when modifying it. DONT modify 
the field in-place, which would usually introduce bugs.
I've raised a separate JIRA, BOOKKEEPER-782, to modify Cookie.java. Kindly 
review it. Thanks!

bq.it is much clear and easier for reviewing and get things checked in faster.
Attached a patch which covers only the cookie rename. This patch has to be 
applied on top of BOOKKEEPER-782.

 Provide admin tool to rename bookie identifier in Cookies
 -

 Key: BOOKKEEPER-773
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-773
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: 001-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 002-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 003-BOOKKEEPER-773-rename-bookieid-in-cookies.patch, 
 004-BOOKKEEPER-773-rename-bookieid.patch, 
 005-BOOKKEEPER-773-rename-bookieid.patch, 
 006-BOOKKEEPER-773-rename-bookieid.patch


 The idea of this JIRA is to implement a mechanism to efficiently rename the 
 bookie identifier present in the Cookies. Cookie information will be present 
 in:
 - ledger & journal directories in each Bookie server
 - cookies znode in ZooKeeper





[jira] [Commented] (BOOKKEEPER-782) Use builder pattern for Cookie

2014-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129932#comment-14129932
 ] 

Hadoop QA commented on BOOKKEEPER-782:
--

Testing JIRA BOOKKEEPER-782


Patch 
[BOOKKEEPER-782.patch|https://issues.apache.org/jira/secure/attachment/12667984/BOOKKEEPER-782.patch]
 downloaded at Thu Sep 11 11:21:42 UTC 2014



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:red}-1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any line longer than 120
.{color:red}-1{color} the patch does not add/modify any testcase
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warnings
.{color:red}WARNING{color}: the current HEAD has 23 Javadoc warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:green}+1 FINDBUGS{color}
.{color:green}+1{color} the patch does not seem to introduce new Findbugs 
warnings
{color:red}-1 TESTS{color}
.Tests run: 921
.Tests failed: 0
.Tests errors: 1

.The patch failed the following testcases:

.  

{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 


{color:red}*-1 Overall result, please check the reported -1(s)*{color}

{color:red}.   There is at least one warning, please check{color}

The full output of the test-patch run is available at

.   https://builds.apache.org/job/bookkeeper-trunk-precommit-build/718/

 Use builder pattern for Cookie
 --

 Key: BOOKKEEPER-782
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-782
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-782.patch


 It would be good to use builder pattern for Cookie, rather than modifying the 
 fields in place.





[jira] [Commented] (BOOKKEEPER-775) Improve MultipleThreadReadTest to reduce flakiness

2014-09-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129942#comment-14129942
 ] 

Rakesh R commented on BOOKKEEPER-775:
-

[Build-718 
MultipleThreadReadTest#test1Ledger50ThreadsRead|https://builds.apache.org/job/bookkeeper-trunk-precommit-build/718/testReport/org.apache.bookkeeper.test/MultipleThreadReadTest/test1Ledger50ThreadsRead]
 has failures. Is that related to the flakiness discussed here?

 Improve MultipleThreadReadTest to reduce flakiness
 --

 Key: BOOKKEEPER-775
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-775
 Project: Bookkeeper
  Issue Type: Test
  Components: bookkeeper-server
Reporter: Sijie Guo
Assignee: Sijie Guo
  Labels: test
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-775.patch








Build failed in Jenkins: bookkeeper-trunk #776

2014-09-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/bookkeeper-trunk/776/

--
[...truncated 702 lines...]

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.3.1:jar (default-jar) @ bookkeeper-stats-api ---
[INFO] Building jar: 
https://builds.apache.org/job/bookkeeper-trunk/ws/bookkeeper-stats/target/bookkeeper-stats-api-4.3.0-SNAPSHOT.jar
[INFO] 
[INFO]  findbugs-maven-plugin:2.5.2:check (default-cli) @ 
bookkeeper-stats-api 
[INFO] 
[INFO] --- findbugs-maven-plugin:2.5.2:findbugs (findbugs) @ 
bookkeeper-stats-api ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO]  findbugs-maven-plugin:2.5.2:check (default-cli) @ 
bookkeeper-stats-api 
[INFO] 
[INFO] --- findbugs-maven-plugin:2.5.2:check (default-cli) @ 
bookkeeper-stats-api ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] 
[INFO] Building bookkeeper-server 4.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ bookkeeper-server ---
[INFO] Deleting 
https://builds.apache.org/job/bookkeeper-trunk/ws/bookkeeper-server (includes 
= [dependency-reduced-pom.xml], excludes = [])
[INFO] 
[INFO] --- apache-rat-plugin:0.7:check (default-cli) @ bookkeeper-server ---
[INFO] Exclude: **/DataFormats.java
[INFO] Exclude: **/BookkeeperProtocol.java
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.1:process (default) @ 
bookkeeper-server ---
[INFO] 
[INFO] --- maven-resources-plugin:2.4.3:resources (default-resources) @ 
bookkeeper-server ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.0:compile (default-compile) @ 
bookkeeper-server ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 192 source files to 
https://builds.apache.org/job/bookkeeper-trunk/ws/bookkeeper-server/target/classes
[INFO] 
[INFO] --- maven-resources-plugin:2.4.3:testResources (default-testResources) @ 
bookkeeper-server ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.0:testCompile (default-testCompile) @ 
bookkeeper-server ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 86 source files to 
https://builds.apache.org/job/bookkeeper-trunk/ws/bookkeeper-server/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.9:test (default-test) @ bookkeeper-server ---
[INFO] Surefire report directory: 
https://builds.apache.org/job/bookkeeper-trunk/ws/bookkeeper-server/target/surefire-reports

---
 T E S T S
---

Running org.apache.bookkeeper.proto.TestPerChannelBookieClient
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.384 sec
Running org.apache.bookkeeper.proto.TestBKStats
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.112 sec
Running org.apache.bookkeeper.proto.TestDeathwatcher
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.942 sec
Running org.apache.bookkeeper.replication.BookieLedgerIndexTest
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.006 sec
Running org.apache.bookkeeper.replication.TestAutoRecoveryAlongWithBookieServers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.672 sec
Running org.apache.bookkeeper.replication.AuditorBookieTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.765 sec
Running org.apache.bookkeeper.replication.AuditorPeriodicCheckTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.253 sec
Running org.apache.bookkeeper.replication.TestLedgerUnderreplicationManager
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.523 sec
Running org.apache.bookkeeper.replication.TestReplicationWorker
Tests run: 27, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.548 sec
Running org.apache.bookkeeper.replication.AuditorPeriodicBookieCheckTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.028 sec
Running org.apache.bookkeeper.replication.AuditorRollingRestartTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.336 sec
Running org.apache.bookkeeper.replication.BookieAutoRecoveryTest
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.134 sec
Running org.apache.bookkeeper.replication.AutoRecoveryMainTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.219 sec
Running 

[jira] [Commented] (BOOKKEEPER-775) Improve MultipleThreadReadTest to reduce flakiness

2014-09-11 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14130171#comment-14130171
 ] 

Flavio Junqueira commented on BOOKKEEPER-775:
-

I'm not sure about the timestamps here. My comment above was committed at 00:41, 
while build 718 apparently finished at 21:45, and I don't know if the time zones 
are consistent. We probably need to wait for 719.

 Improve MultipleThreadReadTest to reduce flakiness
 --

 Key: BOOKKEEPER-775
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-775
 Project: Bookkeeper
  Issue Type: Test
  Components: bookkeeper-server
Reporter: Sijie Guo
Assignee: Sijie Guo
  Labels: test
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-775.patch








[jira] [Commented] (BOOKKEEPER-775) Improve MultipleThreadReadTest to reduce flakiness

2014-09-11 Thread Sijie Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14131083#comment-14131083
 ] 

Sijie Guo commented on BOOKKEEPER-775:
--

[~rakeshr] [~fpj]

It is not the flakiness we discussed here, but it turns out most of the builds 
failed due to 'Failed to create a selector.', which means there are too many 
open files. I suspect there may be some fd leaks, or the fd limit on Jenkins is 
too low. Will take a closer look.

 Improve MultipleThreadReadTest to reduce flakiness
 --

 Key: BOOKKEEPER-775
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-775
 Project: Bookkeeper
  Issue Type: Test
  Components: bookkeeper-server
Reporter: Sijie Guo
Assignee: Sijie Guo
  Labels: test
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-775.patch








[jira] [Commented] (BOOKKEEPER-782) Use builder pattern for Cookie

2014-09-11 Thread Sijie Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14131090#comment-14131090
 ] 

Sijie Guo commented on BOOKKEEPER-782:
--

- Since you already have a builder, you shouldn't have setInstanceId on Cookie 
itself. The interface should look like this:

{code}

Cookie oldCookie;

CookieBuilder builder = Cookie.newBuilder(oldCookie);
builder.setInstanceId(...);
Cookie newCookie = builder.build();
{code}

- The znode version isn't part of a cookie, so it should not be part of the 
builder (where it would become a final field). It is really a piece of state of 
the cookie object: each time we update or delete the cookie, that state 
changes. We only use the builder when we want to modify the fields of a cookie. 
Hence, you don't need to change the signature of writeCookie and deleteCookie.
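A minimal sketch of the shape described above: an immutable Cookie whose fields change only by building a new instance. The field names and accessors here are illustrative, not the actual BookKeeper Cookie API.

```java
// Illustrative sketch of an immutable Cookie with a builder. Fields are final;
// "modifying" a cookie means copying it into a builder and building a new one.
public class Cookie {
    private final String bookieHost;   // hypothetical field, for illustration
    private final String instanceId;   // hypothetical field, for illustration

    private Cookie(Builder b) {
        this.bookieHost = b.bookieHost;
        this.instanceId = b.instanceId;
    }

    public static Builder newBuilder() { return new Builder(); }

    // Seed a builder from an existing cookie, as in Sijie's example usage.
    public static Builder newBuilder(Cookie c) {
        return new Builder().setBookieHost(c.bookieHost).setInstanceId(c.instanceId);
    }

    public String instanceId() { return instanceId; }

    public static class Builder {
        private String bookieHost;
        private String instanceId;

        public Builder setBookieHost(String host) { this.bookieHost = host; return this; }
        public Builder setInstanceId(String id) { this.instanceId = id; return this; }
        public Cookie build() { return new Cookie(this); }
    }

    public static void main(String[] args) {
        Cookie original = Cookie.newBuilder()
                .setBookieHost("host:3181").setInstanceId("id-a").build();
        // The original cookie is untouched; only the copy carries the new id.
        Cookie updated = Cookie.newBuilder(original).setInstanceId("id-b").build();
        System.out.println(original.instanceId() + " -> " + updated.instanceId());
    }
}
```

Since both the original and the rebuilt cookie remain immutable, the in-place mutation bugs Sijie warns about cannot occur.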

 Use builder pattern for Cookie
 --

 Key: BOOKKEEPER-782
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-782
 Project: Bookkeeper
  Issue Type: Sub-task
  Components: bookkeeper-server
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-782.patch


 It would be good to use builder pattern for Cookie, rather than modifying the 
 fields in place.





[jira] [Created] (BOOKKEEPER-783) Avoid running out of fds in MutlipleThreadReadTest

2014-09-11 Thread Sijie Guo (JIRA)
Sijie Guo created BOOKKEEPER-783:


 Summary: Avoid running out of fds in MutlipleThreadReadTest
 Key: BOOKKEEPER-783
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-783
 Project: Bookkeeper
  Issue Type: Bug
  Components: bookkeeper-server
Reporter: Sijie Guo
Assignee: Sijie Guo
 Fix For: 4.3.0


{code}
org.jboss.netty.channel.ChannelException: Failed to create a selector.
at 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
at 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.init(AbstractNioSelector.java:100)
at 
org.jboss.netty.channel.socket.nio.AbstractNioWorker.init(AbstractNioWorker.java:52)
at 
org.jboss.netty.channel.socket.nio.NioWorker.init(NioWorker.java:45)
at 
org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
at 
org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
at 
org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:143)
at 
org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:81)
at 
org.jboss.netty.channel.socket.nio.NioWorkerPool.init(NioWorkerPool.java:39)
at 
org.jboss.netty.channel.socket.nio.NioWorkerPool.init(NioWorkerPool.java:33)
at 
org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory.init(NioClientSocketChannelFactory.java:151)
at 
org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory.init(NioClientSocketChannelFactory.java:116)
at org.apache.bookkeeper.client.BookKeeper.init(BookKeeper.java:204)
at 
org.apache.bookkeeper.client.BookKeeperTestClient.init(BookKeeperTestClient.java:50)
at 
org.apache.bookkeeper.test.MultipleThreadReadTest.createClients(MultipleThreadReadTest.java:73)
at 
org.apache.bookkeeper.test.MultipleThreadReadTest.multiLedgerMultiThreadRead(MultipleThreadReadTest.java:282)
at 
org.apache.bookkeeper.test.MultipleThreadReadTest.test1Ledger50ThreadsRead(MultipleThreadReadTest.java:326)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
Caused by: java.io.IOException: Too many open files
at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
at sun.nio.ch.EPollArrayWrapper.init(EPollArrayWrapper.java:69)
at sun.nio.ch.EPollSelectorImpl.init(EPollSelectorImpl.java:52)
at 
sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
at java.nio.channels.Selector.open(Selector.java:209)
at 
org.jboss.netty.channel.socket.nio.SelectorUtil.open(SelectorUtil.java:63)
at 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:341)
{code}





[jira] [Updated] (BOOKKEEPER-783) Avoid running out of fds in MutlipleThreadReadTest

2014-09-11 Thread Sijie Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/BOOKKEEPER-783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sijie Guo updated BOOKKEEPER-783:
-
Attachment: BOOKKEEPER-783.patch

test1Ledger50ThreadsRead creates 50 bookkeeper clients, which in turn create 50 
zookeeper clients, and each bookkeeper client connects to 6 bookies - which 
might be the reason we run out of fds.

Attached a patch:

- use a single bookkeeper client for reads, which reduces the number of 
connections spawned by this test case
- reduce the number of entries written/read
- reduce the number of threads for test1Ledger50ThreadsRead
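The first bullet can be illustrated with a toy sketch: all reader threads share one client handle instead of constructing one each. The Client class below is a hypothetical stand-in for a BookKeeper client (it only counts instances); it is not the actual test code from the patch.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: N reader threads share one client, so only one set of
// connections (zookeeper session + bookie sockets) is ever opened.
public class SharedClientSketch {

    // Counts how many clients were ever constructed.
    static final AtomicInteger clientsCreated = new AtomicInteger();

    // Hypothetical stand-in for a BookKeeper client.
    static class Client {
        Client() { clientsCreated.incrementAndGet(); }
        long readEntry(long entryId) { return entryId; }  // placeholder read
    }

    static int runReaders(int threads, int entriesPerThread) throws InterruptedException {
        Client shared = new Client();   // one client for every reader thread
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger reads = new AtomicInteger();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (long e = 0; e < entriesPerThread; e++) {
                    shared.readEntry(e);
                    reads.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return reads.get();
    }

    public static void main(String[] args) throws InterruptedException {
        int total = runReaders(10, 5);
        System.out.println(total + " reads with " + clientsCreated.get() + " client(s)");
    }
}
```

With the per-thread client, fd usage grows with thread count; with the shared client it stays constant, which is exactly the fix the patch applies.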

 Avoid running out of fds in MutlipleThreadReadTest
 --

 Key: BOOKKEEPER-783
 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-783
 Project: Bookkeeper
  Issue Type: Bug
  Components: bookkeeper-server
Reporter: Sijie Guo
Assignee: Sijie Guo
  Labels: test
 Fix For: 4.3.0

 Attachments: BOOKKEEPER-783.patch


 {code}
 org.jboss.netty.channel.ChannelException: Failed to create a selector.
   at org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
   at org.jboss.netty.channel.socket.nio.AbstractNioSelector.&lt;init&gt;(AbstractNioSelector.java:100)
   at org.jboss.netty.channel.socket.nio.AbstractNioWorker.&lt;init&gt;(AbstractNioWorker.java:52)
   at org.jboss.netty.channel.socket.nio.NioWorker.&lt;init&gt;(NioWorker.java:45)
   at org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
   at org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
   at org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:143)
   at org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.&lt;init&gt;(AbstractNioWorkerPool.java:81)
   at org.jboss.netty.channel.socket.nio.NioWorkerPool.&lt;init&gt;(NioWorkerPool.java:39)
   at org.jboss.netty.channel.socket.nio.NioWorkerPool.&lt;init&gt;(NioWorkerPool.java:33)
   at org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory.&lt;init&gt;(NioClientSocketChannelFactory.java:151)
   at org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory.&lt;init&gt;(NioClientSocketChannelFactory.java:116)
   at org.apache.bookkeeper.client.BookKeeper.&lt;init&gt;(BookKeeper.java:204)
   at org.apache.bookkeeper.client.BookKeeperTestClient.&lt;init&gt;(BookKeeperTestClient.java:50)
   at org.apache.bookkeeper.test.MultipleThreadReadTest.createClients(MultipleThreadReadTest.java:73)
   at org.apache.bookkeeper.test.MultipleThreadReadTest.multiLedgerMultiThreadRead(MultipleThreadReadTest.java:282)
   at org.apache.bookkeeper.test.MultipleThreadReadTest.test1Ledger50ThreadsRead(MultipleThreadReadTest.java:326)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 Caused by: java.io.IOException: Too many open files
   at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
   at sun.nio.ch.EPollArrayWrapper.&lt;init&gt;(EPollArrayWrapper.java:69)
   at sun.nio.ch.EPollSelectorImpl.&lt;init&gt;(EPollSelectorImpl.java:52)
   at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
   at java.nio.channels.Selector.open(Selector.java:209)
   at org.jboss.netty.channel.socket.nio.SelectorUtil.open(SelectorUtil.java:63)
   at org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:341)
 {code}





[jira] [Commented] (BOOKKEEPER-783) Avoid running out of fds in MultipleThreadReadTest

2014-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/BOOKKEEPER-783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14131137#comment-14131137
 ] 

Hadoop QA commented on BOOKKEEPER-783:
--

Testing JIRA BOOKKEEPER-783


Patch 
[BOOKKEEPER-783.patch|https://issues.apache.org/jira/secure/attachment/12668267/BOOKKEEPER-783.patch]
 downloaded at Fri Sep 12 05:11:39 UTC 2014



{color:green}+1 PATCH_APPLIES{color}
{color:green}+1 CLEAN{color}
{color:green}+1 RAW_PATCH_ANALYSIS{color}
.{color:green}+1{color} the patch does not introduce any @author tags
.{color:green}+1{color} the patch does not introduce any tabs
.{color:green}+1{color} the patch does not introduce any trailing spaces
.{color:green}+1{color} the patch does not introduce any line longer than 120
.{color:green}+1{color} the patch adds/modifies 1 testcase(s)
{color:green}+1 RAT{color}
.{color:green}+1{color} the patch does not seem to introduce new RAT 
warnings
{color:green}+1 JAVADOC{color}
.{color:green}+1{color} the patch does not seem to introduce new Javadoc 
warnings
.{color:red}WARNING{color}: the current HEAD has 23 Javadoc warning(s)
{color:green}+1 COMPILE{color}
.{color:green}+1{color} HEAD compiles
.{color:green}+1{color} patch compiles
.{color:green}+1{color} the patch does not seem to introduce new javac 
warnings
{color:green}+1 FINDBUGS{color}
.{color:green}+1{color} the patch does not seem to introduce new Findbugs 
warnings
{color:green}+1 TESTS{color}
.Tests run: 921
{color:green}+1 DISTRO{color}
.{color:green}+1{color} distro tarball builds with the patch 


{color:green}*+1 Overall result, good!, no -1s*{color}

{color:red}.   There is at least one warning, please check{color}

The full output of the test-patch run is available at

.   https://builds.apache.org/job/bookkeeper-trunk-precommit-build/719/
