Github user clockfly commented on the pull request: https://github.com/apache/storm/pull/268#issuecomment-61393103

High Availability test
===============

Test scenario: 4 machines (A, B, C, D), 4 workers, 1 worker on each machine.

Test case 1 (STORM-404): on machine A, kill the worker. A will create a new worker taking the same port.
------------

Expected result: reconnection will succeed.

Actual result: the other workers start to reconnect and eventually succeed, because A starts a new worker on the same port.

```
2014-11-02T09:31:24.988+0800 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-IDHV22-04/192.168.1.54:6703... [84]
2014-11-02T09:31:25.498+0800 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-IDHV22-04/192.168.1.54:6703... [85]
2014-11-02T09:31:25.498+0800 b.s.m.n.Client [INFO] connection established to a remote host Netty-Client-IDHV22-04/192.168.1.54:6703, [id: 0x54466bab, /192.168.1.51:51336 => IDHV22-04/192.168.1.54:6703]
```

Test case 2 (STORM-404): on machine A, kill the worker, then immediately start a process to occupy the port used by the worker, which forces Storm to relocate the worker to a new port (or a new machine).
--------------

Expected result: the reconnection attempts will fail, because Storm relocates the worker to a new port.

Actual result: first, after many reconnection attempts, the reconnection is aborted and no exception is thrown:

```
2014-11-02T09:31:14.753+0800 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-IDHV22-04/192.168.1.54:6703... [63]
2014-11-02T09:31:18.065+0800 b.s.m.n.Client [INFO] Reconnect started for Netty-Client-IDHV22-04/192.168.1.54:6703... [70]
at org.apache.storm.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71) ~[storm-core-0.9.3-rc2-SNAPSHOT.jar:0.9.3-rc2-SNAPSHOT]
...
2014-11-02T09:45:36.209+0800 b.s.m.n.Client [INFO] Waiting for pending batchs to be sent with Netty-Client-IDHV22-04/192.168.1.54:6703..., timeout: 600000ms, pendings: 0
2014-11-02T09:45:36.209+0800 b.s.m.n.Client [INFO] connection is closing, abort reconnecting...
```

Second, a new connection is made to the new worker (on a new port, or on another machine). Previously the worker was at IDHV22-04:6703; it was relocated to IDHV22-03:6702.

```
2014-11-02T09:45:36.206+0800 b.s.m.n.Client [INFO] New Netty Client, connect to IDHV22-03, 6702, config: , buffer_size: 5242880
2014-11-02T09:45:36.207+0800 b.s.m.n.Client [INFO] connection established to a remote host Netty-Client-IDHV22-03/192.168.1.53:6702, [id: 0x538fdacb, /192.168.1.51:56047 => IDHV22-03/192.168.1.53:6702]
```

Test case 3: check the failed-message count before and after the worker crash
----------------

Expected result: after the worker crashes, there will be some message loss. After the topology stabilizes, the message loss will not increase.

Actual result: meets expectation.

Test case 4: check the throughput change before and after the worker crash
--------------

Expected result: there should be no performance drop.

Actual result: when Storm starts the new worker on the same machine, there is no performance drop. Check the first gap in the following image.

![network bandwidth change before and after worker crash](https://issues.apache.org/jira/secure/attachment/12678758/worker-kill-recover3.jpg)

When Storm starts the new worker on a different machine, it may reduce parallelism. Check the second gap in the picture above. Before the worker crash, there are 4 workers on 4 machines; after the crash, there are 3 workers on 4 machines. The parallelism drops, so the throughput drops.

Test case 5 (STORM-510): when a target worker crashes, messages sent to other workers should not be blocked.
-----------

Expected result: one connection should not block another when a worker crashes.

Actual result: in the code, the blocking logic has been removed.
So one connection will not block another. However, during the transition period after the failure, many messages to the crashed worker are lost, so the max.spout.pending flow control may kick in and the spout sends messages more slowly; the overall maximum throughput is therefore lower. After the transition, it goes back to normal. In my test, the transition period was around 40 seconds.
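The throughput dip described above comes from spout-side flow control: once acks stop arriving for tuples sent to the crashed worker, the cap on in-flight tuples throttles emission. A minimal sketch of how a max.spout.pending-style cap behaves (this is a hypothetical simulation for illustration, not Storm's actual executor code; `emit_until_blocked` is an invented helper):

```python
def emit_until_blocked(max_spout_pending, acked):
    """Count how many tuples a spout can emit before stalling,
    given that `acked` of its in-flight tuples have been acknowledged.

    Models a max.spout.pending-style cap: emission stops as soon as
    the number of un-acked tuples reaches the limit.
    """
    emitted = 0
    while emitted - acked < max_spout_pending:
        emitted += 1
    return emitted

# With a cap of 100 and no acks (tuples lost to the crashed worker),
# the spout stalls after 100 emits; 40 acks free up 40 more slots.
assert emit_until_blocked(100, acked=0) == 100
assert emit_until_blocked(100, acked=40) == 140
```

During the transition, acks for lost tuples only arrive after the tuple timeout fires and the tuples are replayed, which is why the slowdown lasts tens of seconds before throughput recovers.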
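The reconnect-then-abort behavior seen in test cases 1 and 2 can be sketched as a retry loop that stops either when the remote port comes back (case 1) or when the client is marked as closing because the worker was relocated (case 2). This is a toy model under those assumptions; `ReconnectingClient` is an invented class, not Storm's actual `backtype.storm.messaging.netty.Client`:

```python
class ReconnectingClient:
    """Toy model of a Netty-style client that retries a dropped connection."""

    def __init__(self, max_retries=10):
        self.max_retries = max_retries
        self.closing = False  # set when the worker is relocated elsewhere

    def reconnect(self, try_connect):
        """Retry try_connect(attempt) until it succeeds, the client is
        closed, or max_retries is exhausted. Returns True on success."""
        for attempt in range(1, self.max_retries + 1):
            if self.closing:
                # mirrors the log: "connection is closing, abort reconnecting..."
                return False
            if try_connect(attempt):
                # mirrors the log: "connection established to a remote host ..."
                return True
        return False

# Case 1: the worker restarts on the same port, so a later attempt succeeds.
restarted = ReconnectingClient()
assert restarted.reconnect(lambda attempt: attempt >= 3) is True

# Case 2: the worker is relocated, the client is told to close, and the
# reconnection is aborted without raising an exception.
relocated = ReconnectingClient()
relocated.closing = True
assert relocated.reconnect(lambda attempt: False) is False
```

The key design point matching the logs above is that abort is a normal return path, not an exception: the old client drains its pending batches, gives up quietly, and a fresh client is created for the worker's new location.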