[jira] [Commented] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637062#comment-17637062 ] xinqiu.hu commented on HADOOP-18536:

Thanks for your patience!

> RPC Client Improvement
> ----------------------
>
> Key: HADOOP-18536
> URL: https://issues.apache.org/jira/browse/HADOOP-18536
> Project: Hadoop Common
> Issue Type: Improvement
> Components: rpc-server
> Reporter: xinqiu.hu
> Priority: Minor
>
> In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, the three parts are copied into three separate CodedOutputStream internal byte arrays and then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream.
> To simplify this write path, maybe we can copy the parts directly into one large CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects:
> # The generic declaration of rpcRequestQueue inside Client has been changed to Object
> # Protobuf serialization has been moved to rpcRequestThread; since rpcRequestThread is a single thread for each connection, this may have a performance impact
> For the above reasons, I propose this. This PR brings the following benefits:
> # For each RPC request, avoid creating a ResponseBuffer of 1024 bytes
> # For each RPC request, reduce one copy
> # For each RPC request, combine the three fragmented CodedOutputStreams into one
> # No side effects like HADOOP-18533
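The write path in the description above is easiest to see in code. The sketch below shows the single-buffer idea: compute the exact wire size of the three parts, serialize them once into one array, and write that array with its length prefix. It assumes the stock com.google.protobuf API (Hadoop actually ships a shaded copy of protobuf), and the names RpcRequestWriter and writeRequest are illustrative only, not the PR's actual code.

```java
import java.io.DataOutputStream;
import java.io.IOException;

import com.google.protobuf.CodedOutputStream;
import com.google.protobuf.Message;

/** Illustrative sketch of the proposed single-buffer RPC write path. */
public final class RpcRequestWriter {

  /**
   * Serializes the three request parts once, into an exactly-sized array,
   * then writes the 4-byte total-length prefix and the array to the stream.
   * No intermediate ResponseBuffer, no second aggregation copy.
   */
  static void writeRequest(DataOutputStream out,
                           Message rpcRequestHeader,
                           Message requestHeader,
                           Message payload) throws IOException {
    // Each part goes on the wire length-delimited: varint length + body.
    int h1 = rpcRequestHeader.getSerializedSize();
    int h2 = requestHeader.getSerializedSize();
    int p = payload.getSerializedSize();
    int total = CodedOutputStream.computeUInt32SizeNoTag(h1) + h1
        + CodedOutputStream.computeUInt32SizeNoTag(h2) + h2
        + CodedOutputStream.computeUInt32SizeNoTag(p) + p;

    byte[] buf = new byte[total];  // exactly sized, not a 1024-byte guess
    CodedOutputStream cos = CodedOutputStream.newInstance(buf);
    cos.writeUInt32NoTag(h1);
    rpcRequestHeader.writeTo(cos);
    cos.writeUInt32NoTag(h2);
    requestHeader.writeTo(cos);
    cos.writeUInt32NoTag(p);
    payload.writeTo(cos);
    cos.checkNoSpaceLeft();        // sanity check: computed sizes matched

    out.writeInt(total);           // framing: total length prefix
    out.write(buf, 0, total);      // single copy into the buffered stream
    out.flush();
  }
}
```

This is the sense in which the change both drops the 1024-byte ResponseBuffer and saves a copy: the parts are serialized exactly once into a buffer that already knows its final size.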
[GitHub] [hadoop] howzi commented on pull request #5147: HDFS-16848. RBF: Improve StateStoreZooKeeperImpl performance
howzi commented on PR #5147: URL: https://github.com/apache/hadoop/pull/5147#issuecomment-1323243368

> > Actually it is an obvious performance problem; it takes over 3 minutes to refresh the state store cache in our environment. Different deployments of ZK may lead to different choices. For example, we have an exclusive ZK cluster for the router, so it's not a big problem for us.
> > Anyway, I realize this is not a good feature for everyone, so I will change it to an optional configuration, which should be better.
>
> Can we provide some information to explain this problem? It will help to understand this change better.

Thank you very much for helping to review this PR! @ZanderXu We have thousands of mount points on ZK, and the network delay between ZK and the router has become larger because of our multi-region deployment. So we are going to fix it this way.
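For context on the numbers above: with thousands of mount points, a sequential loader pays one ZK round trip per child znode, so a cross-region link multiplies the delay into minutes. The sketch below shows the parallel-read idea using Apache Curator (the client library behind StateStoreZooKeeperImpl); ParallelZkLoader and loadChildren are hypothetical names, not the PR's actual code.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

import org.apache.curator.framework.CuratorFramework;

/** Illustrative sketch: read all children of a state-store znode in parallel. */
final class ParallelZkLoader {

  static List<String> loadChildren(CuratorFramework zk, String parent,
      ExecutorService pool) throws Exception {
    List<CompletableFuture<String>> futures = new ArrayList<>();
    for (String child : zk.getChildren().forPath(parent)) {
      final String path = parent + "/" + child;
      // Each getData() round trip runs on the pool, so slow links overlap
      // instead of adding up one mount point at a time.
      futures.add(CompletableFuture.supplyAsync(() -> {
        try {
          return new String(zk.getData().forPath(path), StandardCharsets.UTF_8);
        } catch (Exception e) {
          throw new IllegalStateException("failed to read " + path, e);
        }
      }, pool));
    }
    List<String> records = new ArrayList<>(futures.size());
    for (CompletableFuture<String> f : futures) {
      records.add(f.join());  // surfaces any individual read failure
    }
    return records;
  }
}
```

Whether this pays off depends on the deployment: on a dedicated, low-latency ZK cluster the sequential loop is already fast, which is why gating the async mode behind an optional configuration, as agreed above, is the safer design.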
[jira] [Comment Edited] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637046#comment-17637046 ] Shilun Fan edited comment on HADOOP-18536 at 11/22/22 7:42 AM:

Thank you very much for your contribution; let's wait for suggestions from the other reviewers. I have linked the PR to this Jira issue, which makes it more convenient to view.
[jira] [Commented] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637046#comment-17637046 ] Shilun Fan commented on HADOOP-18536:

Thank you very much for your contribution; let's wait for suggestions from the other reviewers.
[GitHub] [hadoop] howzi commented on pull request #5147: HDFS-16848. RBF: Improve StateStoreZooKeeperImpl performance
howzi commented on PR #5147: URL: https://github.com/apache/hadoop/pull/5147#issuecomment-1323233078

> @howzi Thanks for your report and this change makes sense.
>
> 1. How about keeping the sync mode and adding a new async mode?
> 2. Can you add one UT to verify the performance improvement of the async mode?

Thank you very much for your help! I changed the code according to your comments; please check it again.
[jira] [Comment Edited] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637039#comment-17637039 ] Shilun Fan edited comment on HADOOP-18536 at 11/22/22 7:35 AM:

Thank you very much for your feedback. Because this code is so widely used, we should pay close attention to risk. It is difficult to judge whether the modified code is functionally equivalent to the original, and the change only avoids the 1024-byte allocation in the ResponseBuffer constructor. The modification I saw copies code directly from other places and hoists it outside the function, and it is difficult to agree with that. Current Hadoop RPC already performs very well; this modification seems over-engineered.
[jira] [Comment Edited] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637039#comment-17637039 ] Shilun Fan edited comment on HADOOP-18536 at 11/22/22 7:34 AM:

Thank you very much for your feedback. Because this code is so widely used, we should pay close attention to risk. It is difficult to judge whether the modified code is functionally equivalent to the original, and the change only avoids the 1024-byte allocation in the ResponseBuffer constructor. The modification I saw copies code directly from other places and hoists it outside the function, and it is difficult to agree with that. Current Hadoop RPC already performs very well; this modification seems over-engineered.
[jira] [Commented] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637039#comment-17637039 ] Shilun Fan commented on HADOOP-18536:

Thank you very much for your feedback. Because this code is so widely used, we should pay close attention to risk. It is difficult to judge whether the modified code is functionally equivalent to the original, and the change only avoids the 1024-byte allocation in the ResponseBuffer constructor; part of the code is simply copied from elsewhere and moved outside the function, and it is difficult to agree with that. Current Hadoop RPC already performs very well; this modification seems over-engineered.
[jira] [Comment Edited] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637036#comment-17637036 ] xinqiu.hu edited comment on HADOOP-18536 at 11/22/22 7:25 AM:

Thank you very much for reviewing the code; I agree with the risk you mentioned. But the RPC Client is not only used in the HDFS client; it is used in almost every component of Hadoop, such as the Application Master, DataNode, YARN NodeManager, and containers. If each request can avoid a 1024-byte allocation, I think there is a real gain. And this idea didn't come out of nowhere; it came from ipc.Server#setupResponseForProtobuf and ipc.Server#setupResponseForWritable.
[jira] [Commented] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637036#comment-17637036 ] xinqiu.hu commented on HADOOP-18536:

Thank you very much for reviewing the code; I agree with the risk you mentioned. But the RPC Client is not only used in the HDFS client; it is used in almost every component of Hadoop, such as the Application Master, DataNode, YARN NodeManager, and containers. If each request can avoid a 1024-byte allocation, I think there is a real gain. And this idea didn't come out of thin air; it came from ipc.Server#setupResponseForProtobuf and ipc.Server#setupResponseForWritable.
[jira] [Updated] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xinqiu.hu updated HADOOP-18536:
Description:
In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, the three parts are copied into three separate CodedOutputStream internal byte arrays and then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream.
To simplify this write path, maybe we can copy the parts directly into one large CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects:
# The generic declaration of rpcRequestQueue inside Client has been changed to Object
# Protobuf serialization has been moved to rpcRequestThread; since rpcRequestThread is a single thread for each connection, this may have a performance impact
For the above reasons, I propose this. This PR brings the following benefits:
# For each RPC request, avoid creating a ResponseBuffer of 1024 bytes
# For each RPC request, reduce one copy
# For each RPC request, combine the three fragmented CodedOutputStreams into one
# No side effects like HADOOP-18533
[jira] [Comment Edited] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637013#comment-17637013 ] Shilun Fan edited comment on HADOOP-18536 at 11/22/22 6:04 AM:

Thank you very much for your thoughts. From my personal point of view, the risk of this change is high but the benefit is very small, and it is difficult for us to see a performance improvement. For hadoop-client, the default configuration is 512 MB of memory, so a 1024-byte copy has basically no impact; even if our client has only 2-3 MB of memory, it still has basically no impact.
[jira] [Commented] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637013#comment-17637013 ] Shilun Fan commented on HADOOP-18536:

From my personal point of view, the risk of this change is high but the benefit is very small, and it is difficult for us to see a performance improvement. For hadoop-client, the default configuration is 512 MB of memory, so a 1024-byte copy has basically no impact; even if our client has only 2-3 MB of memory, it still has basically no impact.
[jira] [Commented] (HADOOP-18533) RPC Client performance improvement
[ https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637012#comment-17637012 ] ASF GitHub Bot commented on HADOOP-18533:

huxinqiu closed pull request #5151: HADOOP-18533. RPC Client performance improvement URL: https://github.com/apache/hadoop/pull/5151

> RPC Client performance improvement
> ----------------------------------
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
> Issue Type: Improvement
> Components: rpc-server
> Reporter: xinqiu.hu
> Priority: Minor
> Labels: pull-request-available
>
> The current implementation copies the rpcRequest and header into a ByteArrayOutputStream in order to calculate the total length of the request, and then writes it to the socket buffer.
> But if the RPC engine is ProtobufRpcEngine2, we can pre-calculate the request size and then write the request directly to the socket buffer, saving one memory copy and avoiding the allocation of a 1024-byte ResponseBuffer for every request.
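The key observation behind HADOOP-18533 is that a protobuf message reports its own wire size via getSerializedSize(), so the total request length can be known before any bytes are buffered. Below is a minimal sketch of that pre-calculation, assuming the stock com.google.protobuf API and the length-prefixed, delimited framing described above; RpcFrameSize and its helpers are hypothetical names, not the PR's code.

```java
import com.google.protobuf.CodedOutputStream;
import com.google.protobuf.Message;

/** Illustrative helpers for pre-computing a request's frame size. */
final class RpcFrameSize {

  /** Size of one length-delimited part: varint length field + message body. */
  static int delimitedSize(Message m) {
    int body = m.getSerializedSize();
    return CodedOutputStream.computeUInt32SizeNoTag(body) + body;
  }

  /**
   * Total frame size for a header + request pair. Because this is known
   * up front, the 4-byte length prefix can be written immediately and the
   * parts streamed straight to the socket buffer, skipping the
   * ByteArrayOutputStream that exists only to measure them.
   */
  static int totalFrameSize(Message rpcRequestHeader, Message request) {
    return delimitedSize(rpcRequestHeader) + delimitedSize(request);
  }
}
```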
[GitHub] [hadoop] huxinqiu closed pull request #5151: HADOOP-18533. RPC Client performance improvement
huxinqiu closed pull request #5151: HADOOP-18533. RPC Client performance improvement URL: https://github.com/apache/hadoop/pull/5151
[jira] [Updated] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xinqiu.hu updated HADOOP-18536:
Description:
In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, the three parts are copied into three separate CodedOutputStream internal byte arrays and then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream.
To simplify this write path, maybe we can copy the parts directly into one large CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects:
# The generic declaration of rpcRequestQueue inside Client has been changed to Object
# Protobuf serialization has been moved to rpcRequestThread; since rpcRequestThread is a single thread for each connection, this may have a performance impact
For the above reasons, I propose this. This PR brings the following benefits:
# For each RPC request, avoid creating a ResponseBuffer of 1024 bytes
# For each RPC request, combine the three fragmented CodedOutputStreams into one
# No side effects like HADOOP-18533
[jira] [Commented] (HADOOP-18533) RPC Client performance improvement
[ https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637011#comment-17637011 ] ASF GitHub Bot commented on HADOOP-18533:

hadoop-yetus commented on PR #5151 with a :confetti_ball: +1 overall vote; the full Yetus report appears in the GitHub message below. URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1323121288
[GitHub] [hadoop] hadoop-yetus commented on pull request #5151: HADOOP-18533. RPC Client performance improvement
hadoop-yetus commented on PR #5151: URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1323121288

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 52s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 42m 39s | | trunk passed |
| +1 :green_heart: | compile | 25m 27s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 21m 57s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 24s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 53s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 24s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 4s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 32s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 4s | | the patch passed |
| +1 :green_heart: | compile | 24m 43s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 24m 43s | | the patch passed |
| +1 :green_heart: | compile | 22m 12s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 22m 12s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 19s | | hadoop-common-project/hadoop-common: The patch generated 0 new + 185 unchanged - 1 fixed = 185 total (was 186) |
| +1 :green_heart: | mvnsite | 1m 50s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 15s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 2m 57s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 6s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 18m 34s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 1m 8s | | The patch does not generate ASF License warnings. |
| | | 228m 22s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5151 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 23938719269c 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 57eda6889798acb592616de4c920fde477f2e392 |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/9/testReport/ |
| Max. process+thread count | 1331 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/9/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xinqiu.hu updated HADOOP-18536:
Description:
In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, the three parts are copied into three separate CodedOutputStream internal byte arrays and then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream.
To simplify this write path, maybe we can copy the parts directly into one large CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects:
# The generic declaration of rpcRequestQueue inside Client has been changed to Object
# Protobuf serialization has been moved to rpcRequestThread; since rpcRequestThread is a single thread for each connection, this may have a performance impact
For the above reasons, I propose this. This PR brings the following benefits:
# For each RPC request, avoid creating a ResponseBuffer of 1024 bytes
# For each RPC request, reduce one copy
# For each RPC request, combine the three fragmented CodedOutputStreams into one
# No side effects like HADOOP-18533
[jira] [Commented] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637010#comment-17637010 ] xinqiu.hu commented on HADOOP-18536:

HADOOP-18536 is a better approach than HADOOP-18533, with the goal of reducing RPC request copies. HADOOP-18534 is for the early release of direct memory.
[jira] [Commented] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637009#comment-17637009 ] Shilun Fan commented on HADOOP-18536:

Is there any relationship between the three Jiras HADOOP-18533, HADOOP-18534, and HADOOP-18536?
[jira] [Updated] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xinqiu.hu updated HADOOP-18536:
Description:
In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, the three parts are copied into three separate CodedOutputStream internal byte arrays and then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream.
To simplify this write path, maybe we can copy the parts directly into one large CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects:
# The generic declaration of rpcRequestQueue inside Client has been changed to Object
# Protobuf serialization has been moved to rpcRequestThread; since rpcRequestThread is a single thread for each connection, this may have a performance impact
For the above reasons, I propose this. This PR brings the following benefits:
# For each RPC request, avoid creating a ResponseBuffer of 1024 bytes
# For each RPC request, reduce one copy
# For each RPC request, combine the three fragmented CodedOutputStreams into one
# No side effects like HADOOP-18533
[jira] [Updated] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xinqiu.hu updated HADOOP-18536: --- Target Version/s: 3.4.0 Priority: Minor (was: Major)

> RPC Client Improvement
> --
>
> Key: HADOOP-18536
> URL: https://issues.apache.org/jira/browse/HADOOP-18536
> Project: Hadoop Common
> Issue Type: Improvement
> Components: rpc-server
> Reporter: xinqiu.hu
> Priority: Minor
>
> In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, each part is copied into the internal byte array of its own CodedOutputStream, and the three are then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream.
> To simplify the write path, maybe we can copy the parts directly into one big CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects.
> # The generic declaration of rpcRequestQueue inside Client has to be changed to Object
> # Protobuf serialization is moved to rpcRequestThread; because rpcRequestThread is a single thread per connection, this may have a performance impact.
> For the above reasons, I propose this change. This PR brings the following benefits:
> # For each RPC request, avoid creating a 1024-byte ResponseBuffer
> # For each RPC request, combine the three fragmented CodedOutputStreams into one
> # No side effects like HADOOP-18533
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HADOOP-18536) RPC Client Improvement
[ https://issues.apache.org/jira/browse/HADOOP-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xinqiu.hu updated HADOOP-18536: --- Description: In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, each part is copied into the internal byte array of its own CodedOutputStream, and the three are then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream. To simplify the write path, maybe we can copy the parts directly into one big CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects.
# The generic declaration of rpcRequestQueue inside Client has to be changed to Object
# Protobuf serialization is moved to rpcRequestThread; because rpcRequestThread is a single thread per connection, this may have a performance impact.
For the above reasons, I propose this change. This PR brings the following benefits:
# For each RPC request, avoid creating a 1024-byte ResponseBuffer
# Reduce one request copy
# For each RPC request, combine the three fragmented CodedOutputStreams into one
# No side effects like HADOOP-18533

was: In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, each part is copied into the internal byte array of its own CodedOutputStream, and the three are then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream. To simplify the write path, maybe we can copy the parts directly into one big CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects.
# The generic declaration of rpcRequestQueue inside Client has to be changed to Object
# Protobuf serialization is moved to rpcRequestThread; because rpcRequestThread is a single thread per connection, this may have a performance impact.
For the above reasons, I propose this change. This PR brings the following benefits:
# For each RPC request, avoid creating a 1024-byte ResponseBuffer
# For each RPC request, combine the three fragmented CodedOutputStreams into one
# No side effects like HADOOP-18533

> RPC Client Improvement
> --
>
> Key: HADOOP-18536
> URL: https://issues.apache.org/jira/browse/HADOOP-18536
> Project: Hadoop Common
> Issue Type: Improvement
> Components: rpc-server
> Reporter: xinqiu.hu
> Priority: Minor
>
> In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, each part is copied into the internal byte array of its own CodedOutputStream, and the three are then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream.
> To simplify the write path, maybe we can copy the parts directly into one big CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects.
> # The generic declaration of rpcRequestQueue inside Client has to be changed to Object
> # Protobuf serialization is moved to rpcRequestThread; because rpcRequestThread is a single thread per connection, this may have a performance impact.
> For the above reasons, I propose this change. This PR brings the following benefits:
> # For each RPC request, avoid creating a 1024-byte ResponseBuffer
> # Reduce one request copy
> # For each RPC request, combine the three fragmented CodedOutputStreams into one
> # No side effects like HADOOP-18533
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HADOOP-18533) RPC Client performance improvement
[ https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17637008#comment-17637008 ] ASF GitHub Bot commented on HADOOP-18533: - huxinqiu commented on PR #5151: URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1323110323 [HADOOP-18536](https://github.com/apache/hadoop/pull/5156) may be more suitable

> RPC Client performance improvement
> --
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
> Issue Type: Improvement
> Components: rpc-server
> Reporter: xinqiu.hu
> Priority: Minor
> Labels: pull-request-available
>
> The current implementation copies the rpcRequest and header to a ByteArrayOutputStream in order to calculate the total length of the sent request, and then writes it to the socket buffer.
> But if the RPC engine is ProtobufRpcEngine2, we can pre-calculate the request size and then send the request directly to the socket buffer, reducing one memory copy and avoiding the allocation of a 1024-byte ResponseBuffer each time a request is sent.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hadoop] huxinqiu commented on pull request #5151: HADOOP-18533. RPC Client performance improvement
huxinqiu commented on PR #5151: URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1323110323 [HADOOP-18536](https://github.com/apache/hadoop/pull/5156) may be more suitable -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] huxinqiu opened a new pull request, #5156: RPC Client Improvement
huxinqiu opened a new pull request, #5156: URL: https://github.com/apache/hadoop/pull/5156 In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, each part is copied into the internal byte array of its own CodedOutputStream, and the three are then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream. To simplify the write path, maybe we can copy the parts directly into one big CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed [HADOOP-18533](https://issues.apache.org/jira/browse/HADOOP-18533), but it brings the following two side effects.
1. The generic declaration of rpcRequestQueue inside Client has to be changed to Object.
2. Protobuf serialization is moved to rpcRequestThread; because rpcRequestThread is a single thread for each connection, this may have a performance impact.
For the above reasons, I propose this change. This PR brings the following benefits:
1. For each RPC request, avoid creating a 1024-byte ResponseBuffer.
2. For each RPC request, combine the three fragmented CodedOutputStreams into one.
3. No side effects like [HADOOP-18533](https://issues.apache.org/jira/browse/HADOOP-18533).
-- This is an automated message from the Apache Git Service.
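As a rough illustration of the direction the PR describes, here is a minimal, hedged sketch. This is not the actual PR #5156 code: the class and method names are invented, and the exact framing (a 4-byte total length followed by three varint-delimited protobufs) is an assumption for illustration, not a statement of Hadoop's wire format.

```java
import com.google.protobuf.CodedOutputStream;
import com.google.protobuf.Message;

import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/** Sketch: serialize all three request parts through one CodedOutputStream. */
public final class SingleBufferRpcWriterSketch {

  /** Size of one protobuf when written as a varint-delimited message. */
  private static int delimitedSize(Message m) {
    int body = m.getSerializedSize();
    return CodedOutputStream.computeUInt32SizeNoTag(body) + body;
  }

  /** Pre-computes the total size, then writes everything in one pass. */
  public static void writeRequest(OutputStream out, Message rpcRequestHeader,
      Message requestHeader, Message payload) throws IOException {
    int total = delimitedSize(rpcRequestHeader) + delimitedSize(requestHeader)
        + delimitedSize(payload);
    byte[] buf = new byte[total];                  // exactly sized, no 1024-byte guess
    CodedOutputStream cos = CodedOutputStream.newInstance(buf);
    for (Message m : new Message[] {rpcRequestHeader, requestHeader, payload}) {
      cos.writeUInt32NoTag(m.getSerializedSize()); // delimiter: varint length
      m.writeTo(cos);                              // body straight into the shared buffer
    }
    cos.checkNoSpaceLeft();                        // sanity: every byte accounted for
    DataOutputStream dos = new DataOutputStream(out);
    dos.writeInt(total);                           // frame with the pre-computed length
    dos.write(buf, 0, total);
    dos.flush();
  }
}
```

Because the total size is known before any bytes are produced, the three per-part buffers and the intermediate ResponseBuffer copy both disappear, which is the benefit the PR claims.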
[jira] [Created] (HADOOP-18536) RPC Client Improvement
xinqiu.hu created HADOOP-18536: -- Summary: RPC Client Improvement Key: HADOOP-18536 URL: https://issues.apache.org/jira/browse/HADOOP-18536 Project: Hadoop Common Issue Type: Improvement Components: rpc-server Reporter: xinqiu.hu

In the RPC Client, before a request (consisting of the RpcRequestHeaderProto, the RequestHeaderProto, and the message payload) is sent, each part is copied into the internal byte array of its own CodedOutputStream, and the three are then aggregated into the ResponseBuffer's internal byte array. The ResponseBuffer byte array is then written to a BufferedOutputStream and finally to a SocketOutputStream. To simplify the write path, maybe we can copy the parts directly into one big CodedOutputStream and send it directly to IpcStreams#out. To achieve this, I proposed HADOOP-18533, but it brings the following two side effects.
# The generic declaration of rpcRequestQueue inside Client has to be changed to Object
# Protobuf serialization is moved to rpcRequestThread; because rpcRequestThread is a single thread for each connection, this may have a performance impact.
For the above reasons, I propose this change. This PR brings the following benefits:
# For each RPC request, avoid creating a 1024-byte ResponseBuffer
# For each RPC request, combine the three fragmented CodedOutputStreams into one
# No side effects like HADOOP-18533
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hadoop] hadoop-yetus commented on pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
hadoop-yetus commented on PR #5145: URL: https://github.com/apache/hadoop/pull/5145#issuecomment-1323041154 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 46m 23s | | trunk passed | | +1 :green_heart: | compile | 0m 59s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 52s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 3s | | trunk passed | | +1 :green_heart: | javadoc | 1m 3s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 7s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 35s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 45s | | the patch passed | | +1 :green_heart: | compile | 0m 46s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 46s | | the patch passed | | +1 :green_heart: | compile | 0m 44s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 44s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 23s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 51s | | the patch passed | | +1 :green_heart: | javadoc | 0m 45s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 57s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 36s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 40s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 39m 23s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 153m 37s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5145 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux b746bef3ebcf 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 99c79c996bbed3176b9c512760b6dfd61c9854c3 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/6/testReport/ | | Max. process+thread count | 2473 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/6/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please
[GitHub] [hadoop] hadoop-yetus commented on pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
hadoop-yetus commented on PR #5145: URL: https://github.com/apache/hadoop/pull/5145#issuecomment-1323037298 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 2m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 44m 13s | | trunk passed | | +1 :green_heart: | compile | 0m 51s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 47s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 38s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 54s | | trunk passed | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 14s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 53s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 18s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 39s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 36s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 21s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 40s | | the patch passed | | +1 :green_heart: | javadoc | 0m 43s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 3s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 42s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 38s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 39m 14s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. 
| | | | 152m 2s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterFederationRename | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5145 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 845fc3f23ea1 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 99c79c996bbed3176b9c512760b6dfd61c9854c3 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/5/testReport/ | | Max. process+thread count | 2124 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/5/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This me
[GitHub] [hadoop] hadoop-yetus commented on pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
hadoop-yetus commented on PR #5145: URL: https://github.com/apache/hadoop/pull/5145#issuecomment-1323033683 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 43m 29s | | trunk passed | | +1 :green_heart: | compile | 0m 51s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 48s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 42s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 53s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 38s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 15s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 40s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 26s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 38s | | the patch passed | | +1 :green_heart: | javadoc | 0m 40s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 1s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 36m 24s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 139m 10s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5145 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 1b60745c267f 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c8251b56d51d3a999625d177e2d76c77e688d4be | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/7/testReport/ | | Max. process+thread count | 2141 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/7/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please
[GitHub] [hadoop] GuoPhilipse commented on pull request #4602: HDFS-16673. Fix usage of chown
GuoPhilipse commented on PR #4602: URL: https://github.com/apache/hadoop/pull/4602#issuecomment-1322966990 > Did you test it? This behavior is consistent with the local file system. Thanks for your review @tomscut. Actually, the `chown` command can be used to change a file's owner or its group. I used `chown` to change the file's group while keeping the owner as before. Maybe we can state it more clearly: if we use `chown` to change the owner, the caller must be a super user; if we use `chown` to change only the group, the caller may be either the owner of the file or a super user. The reproduce steps are as follows.
Step 1: create a new group: `groupadd philipse1`
Step 2: add a user to the group: `useradd philipse -g philipse1`
Step 3: touch a file (as any user): `hdfs dfs -touch /tmp/test2/test3`
Step 4: use the super user (hadoop is the super user) to change the owner and group of the file to philipse: `hdfs dfs -chown philipse:philipse /tmp/test2/test3`. The file info is now: `-rw-r--r-- 3 philipse philipse 0 2022-11-22 10:53 /tmp/test2/test3`
Step 5: use the owner philipse to change to a new owner; this fails because philipse is not the super user: `hdfs dfs -chown root:philipse /tmp/test2/test3` reports `chown: changing ownership of '/tmp/test2/test3': User philipse is not a super user (non-super user cannot change owner).`
Step 6: use the owner philipse to change to a new group; this succeeds: `hdfs dfs -chown philipse:philipse1 /tmp/test2/test3`. Listing the file shows the group has been modified by the `chown` command: `hdfs dfs -ls /tmp/test2/test3` now prints `-rw-r--r-- 3 philipse philipse1 0 2022-11-22 10:53 /tmp/test2/test3`
-- This is an automated message from the Apache Git Service.
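To make the rule demonstrated by the steps above concrete, here is a hedged Java condensation of the check being described. It is purely illustrative: the class, method, and parameters are hypothetical, and this is not the actual HDFS permission-check code.

```java
import java.io.IOException;

/** Hypothetical condensation of the chown rule discussed above. */
final class ChownCheckSketch {
  /**
   * Changing the owner requires a super user; changing only the group is
   * allowed for the file's owner or a super user.
   */
  static void checkChown(String caller, boolean isSuperUser, String fileOwner,
      String newOwner, String newGroup) throws IOException {
    if (newOwner != null && !newOwner.equals(fileOwner) && !isSuperUser) {
      throw new IOException("User " + caller
          + " is not a super user (non-super user cannot change owner).");
    }
    if (newGroup != null && !isSuperUser && !caller.equals(fileOwner)) {
      throw new IOException("User " + caller
          + " must be the owner of the file or a super user to change the group.");
    }
  }
}
```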
[GitHub] [hadoop] hadoop-yetus commented on pull request #5155: HDFS-16851: RBF: Add a utility to dump the StateStore.
hadoop-yetus commented on PR #5155: URL: https://github.com/apache/hadoop/pull/5155#issuecomment-1322938837 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 6s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 43m 58s | | trunk passed | | +1 :green_heart: | compile | 0m 52s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 38s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 53s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 30s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 38s | | the patch passed | | +1 :green_heart: | compile | 0m 41s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 41s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 21s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) | | +1 :green_heart: | mvnsite | 0m 37s | | the patch passed | | +1 :green_heart: | javadoc | 0m 38s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 26s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 17s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 39m 27s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 148m 29s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5155 | | JIRA Issue | HDFS-16851 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint | | uname | Linux 0e254a6fc59a 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f73d03a02c9ce9df616b526e08c8a8ba3385476a | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/3/testReport
[GitHub] [hadoop] haiyang1987 commented on pull request #5129: HDFS-16840. Enhance the usage description about oiv in HDFSCommands.md and OfflineImageViewerPB
haiyang1987 commented on PR #5129: URL: https://github.com/apache/hadoop/pull/5129#issuecomment-1322920504 Updated the PR. @ZanderXu @tomscut @tasanuma, please help me review it again. Thanks.
-- This is an automated message from the Apache Git Service.
[GitHub] [hadoop] haiyang1987 commented on a diff in pull request #5129: HDFS-16840. Enhance the usage description about oiv in HDFSCommands.md and OfflineImageViewerPB
haiyang1987 commented on code in PR #5129: URL: https://github.com/apache/hadoop/pull/5129#discussion_r1028719428 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java: ## @@ -81,6 +81,8 @@ public class OfflineImageViewerPB { + "changed via the -delimiter argument.\n" + "-sp print storage policy, used by delimiter only.\n" + "-ec print erasure coding policy, used by delimiter only.\n" + + "-m,--multiThread defines multiThread to process sub-sections, \n" Review Comment: OK, I will remove --multiThread.
-- This is an automated message from the Apache Git Service.
[GitHub] [hadoop] haiyang1987 commented on a diff in pull request #5129: HDFS-16840. Enhance the usage description about oiv in HDFSCommands.md and OfflineImageViewerPB
haiyang1987 commented on code in PR #5129: URL: https://github.com/apache/hadoop/pull/5129#discussion_r1028717101 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java: ## @@ -81,6 +81,8 @@ public class OfflineImageViewerPB { + "changed via the -delimiter argument.\n" + "-sp print storage policy, used by delimiter only.\n" + "-ec print erasure coding policy, used by delimiter only.\n" + + "-m,--multiThread defines multiThread to process sub-sections, \n" Review Comment: Hi @tomscut, is the duplicated option -m?
-- This is an automated message from the Apache Git Service.
[GitHub] [hadoop] simbadzina commented on a diff in pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
simbadzina commented on code in PR #5145: URL: https://github.com/apache/hadoop/pull/5145#discussion_r1028715143 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java: ## @@ -348,25 +357,28 @@ public boolean putAll( for (Entry entry : toWrite.entrySet()) { String recordPath = entry.getKey(); String recordPathTemp = recordPath + "." + now() + TMP_MARK; - BufferedWriter writer = getWriter(recordPathTemp); + BufferedWriter writer = getBufferedWriter(recordPathTemp); + boolean recordWrittenSuccessfully = true; Review Comment: Fixed. When the writer fails to close, I kept the statement to set recordWrittenSuccessfully to false, since the stream may not have been flushed correctly. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
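For readers following the thread, here is a self-contained sketch of the write-then-commit pattern being discussed. It is simplified: it uses java.nio directly instead of the actual StateStoreFileBaseImpl helpers, but the flag handling mirrors the review comments (initialize to false, set true only after the write, and fall back to false if close fails so an unflushed stream is never committed).

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Sketch of writing a record to a ".tmp" file and committing only on success. */
final class TmpFileCommitSketch {
  static boolean putRecord(Path recordPath, String serializedRecord) {
    Path tmp = recordPath.resolveSibling(recordPath.getFileName() + ".tmp");
    boolean recordWrittenSuccessfully = false;        // start pessimistic
    try (BufferedWriter writer = Files.newBufferedWriter(tmp)) {
      writer.write(serializedRecord);
      recordWrittenSuccessfully = true;               // reached only if write() did not throw
    } catch (IOException e) {
      recordWrittenSuccessfully = false;              // also covers close() failing to flush
    }
    if (!recordWrittenSuccessfully) {
      return false;                                   // do not commit a possibly-corrupt tmp file
    }
    try {
      Files.move(tmp, recordPath, StandardCopyOption.REPLACE_EXISTING); // commit
      return true;
    } catch (IOException e) {
      return false;
    }
  }
}
```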
[GitHub] [hadoop] hadoop-yetus commented on pull request #4602: HDFS-16673. Fix usage of chown
hadoop-yetus commented on PR #4602: URL: https://github.com/apache/hadoop/pull/4602#issuecomment-1322882225 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 42s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 38s | | trunk passed | | +1 :green_heart: | compile | 25m 50s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 22m 16s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 50s | | trunk passed | | +1 :green_heart: | javadoc | 2m 50s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 53s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 26m 48s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 30s | | the patch passed | | +1 :green_heart: | compile | 24m 45s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 24m 45s | | the patch passed | | +1 :green_heart: | compile | 22m 46s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 22m 46s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 14s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 45s | | the patch passed | | +1 :green_heart: | javadoc | 2m 51s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 59s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 7m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 3s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 39s | | hadoop-common in the patch passed. | | -1 :x: | unit | 398m 12s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4602/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 27s | | The patch does not generate ASF License warnings. 
| | | | 655m 24s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4602/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4602 | | Optional Tests | dupname asflicense mvnsite codespell detsecrets markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs checkstyle | | uname | Linux 3da7bf7f7396 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9857a3a3414cc67550e9b0d7fbd9f440446989ec | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4602/2/testReport/ | | Max. process+thread count | 2928 (vs. ulimit of 5500) | | modules | C: hadoop-common-pro
[GitHub] [hadoop] tomscut commented on pull request #4209: HDFS-16550. Improper cache-size for journal node may cause cluster crash
tomscut commented on PR #4209: URL: https://github.com/apache/hadoop/pull/4209#issuecomment-1322873539 > I am -1 on the PR as-is. We have publicly exposed the current config `dfs.journalnode.edit-cache-size.bytes`; we can't just rename it and change the behavior now. I also think there is a lot of value in being able to configure the cache size exactly, rather than as a fraction, but I do recognize the value in using a ratio as a helpful default (one less knob to tune). I would propose: > > * _Add_ (not replace) a new config `dfs.journalnode.edit-cache-size.fraction` (or `.ratio`? but either way I think we should maintain the `edit-cache-size` prefix) > * If `edit-cache-size.bytes` is set, use that value. Otherwise, use the value of `edit-cache-size.fraction * Runtime#maxMemory()`, which has a default value set. > * I would suggest 0.5 rather than 0.3 for the default value of `fraction` but am open to discussion there. > > This still does change the default behavior slightly, since before you would get a 1GB cache and now you get `-Xmx * 0.5`, but there is an easy way to preserve the old behavior and if you've explicitly configured the cache size (which you probably did, if you're using the feature) then there is no change. Thank you for your comments and detailed suggestions. It is a good idea to add a new config. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
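A sketch of how the proposed config resolution could look. The `.fraction` key and the 0.5 default come from the suggestion quoted above; this is not committed code, only an illustration of the "explicit bytes win, otherwise derive from the heap" rule.

```java
import org.apache.hadoop.conf.Configuration;

/** Sketch of resolving the journal node edit cache size. */
final class EditCacheSizingSketch {
  static long resolveEditCacheSize(Configuration conf) {
    long bytes = conf.getLong("dfs.journalnode.edit-cache-size.bytes", -1L);
    if (bytes > 0) {
      return bytes; // explicitly configured size wins, preserving old behavior
    }
    float fraction =
        conf.getFloat("dfs.journalnode.edit-cache-size.fraction", 0.5f);
    return (long) (Runtime.getRuntime().maxMemory() * fraction);
  }
}
```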
[jira] [Created] (HADOOP-18535) Implement token storage solution based on MySQL
Hector Sandoval Chaverri created HADOOP-18535: - Summary: Implement token storage solution based on MySQL Key: HADOOP-18535 URL: https://issues.apache.org/jira/browse/HADOOP-18535 Project: Hadoop Common Issue Type: Improvement Reporter: Hector Sandoval Chaverri Assignee: Hector Sandoval Chaverri Hadoop RBF supports custom implementations of secret managers. At the moment, the only available implementation is ZKDelegationTokenSecretManagerImpl, which stores tokens and delegation keys in Zookeeper. During our investigation, we found that the performance of routers is limited by the writes to the Zookeeper token store, which impacts requests for token creation, renewal and cancellation. An alternative secret manager implementation will be made available, based on MySQL, to handle a higher number of writes. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
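Purely as an illustration of the kind of write such a store has to make fast, here is a hedged JDBC sketch. This is not the HADOOP-18535 implementation; the table and column names are invented, and the schema is an assumption.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

/** Sketch: the hot write path of a hypothetical SQL-backed token store. */
final class SqlTokenStoreSketch {
  /** Upserts one delegation token row; MySQL's REPLACE keeps it one statement. */
  static void storeToken(DataSource ds, int sequenceNum, byte[] tokenInfo)
      throws SQLException {
    try (Connection conn = ds.getConnection();
         PreparedStatement ps = conn.prepareStatement(
             "REPLACE INTO delegation_tokens (seq_num, token_info) VALUES (?, ?)")) {
      ps.setInt(1, sequenceNum);
      ps.setBytes(2, tokenInfo);
      ps.executeUpdate();
    }
  }
}
```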
[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation
[ https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636945#comment-17636945 ] Viraj Jasani commented on HADOOP-18399: --- [~ste...@apache.org], just checking here if you got some bandwidth to resume the review of [https://github.com/apache/hadoop/pull/5054] Thanks

> SingleFilePerBlockCache to use LocalDirAllocator for file allocation
> 
>
> Key: HADOOP-18399
> URL: https://issues.apache.org/jira/browse/HADOOP-18399
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> The prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to allocate a temp file.
> It should be using LocalDirAllocator to allocate space from a list of dirs, taking a config key to use. For s3a we will use the Constants.BUFFER_DIR option, which on YARN deployments is fixed under the env.LOCAL_DIR path, so it is automatically cleaned up on container exit.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
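A hedged sketch of the change the JIRA asks for, assuming the public LocalDirAllocator API. The string "fs.s3a.buffer.dir" is the value of the Constants.BUFFER_DIR key mentioned above; the class name and file-name prefix are invented for illustration.

```java
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;

/** Sketch: allocate the prefetch cache file from the configured buffer dirs. */
final class PrefetchCacheAllocationSketch {
  static File allocateCacheFile(Configuration conf, long sizeBytes)
      throws IOException {
    // Round-robins across the dirs configured under fs.s3a.buffer.dir,
    // instead of dropping an unmanaged file in java.io.tmpdir.
    LocalDirAllocator allocator = new LocalDirAllocator("fs.s3a.buffer.dir");
    return allocator.createTmpFileForWrite("s3a-prefetch-cache-", sizeBytes, conf);
  }
}
```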
[GitHub] [hadoop] omalley commented on a diff in pull request #5142: HDFS-16845: Adds configuration flag to allow clients to use router observer reads without using the ObserverReadProxyProvider.
omalley commented on code in PR #5142: URL: https://github.com/apache/hadoop/pull/5142#discussion_r1028642292 ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java: ## @@ -349,6 +349,13 @@ public static ClientProtocol createProxyWithAlignmentContext( boolean withRetries, AtomicBoolean fallbackToSimpleAuth, AlignmentContext alignmentContext) throws IOException { +if (conf.getBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, Review Comment: I'd simplify the check to ``` if (alignmentContext == null && conf.getBoolean()) { alignmentContext = new ClientGSIContext(); } ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
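Spelled out as compilable code, the suggested simplification would read as follows. The config key and ClientGSIContext come from the surrounding diff; the `false` default value is an assumption, not taken from the patch.

```java
// Hedged completion of the shorthand in the review comment above.
if (alignmentContext == null
    && conf.getBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, false)) {
  alignmentContext = new ClientGSIContext();
}
```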
[GitHub] [hadoop] hadoop-yetus commented on pull request #5155: HDFS-16851: RBF: Add a utility to dump the StateStore.
hadoop-yetus commented on PR #5155: URL: https://github.com/apache/hadoop/pull/5155#issuecomment-1322844291 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 25s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 55s | | trunk passed | | +1 :green_heart: | compile | 0m 53s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 45s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 39s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 51s | | trunk passed | | +1 :green_heart: | javadoc | 0m 59s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 43s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | -1 :x: | javac | 0m 40s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/2/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 55 unchanged - 0 fixed = 56 total (was 55) | | +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | -1 :x: | javac | 0m 36s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/2/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 1 new + 55 unchanged - 0 fixed = 56 total (was 55) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 21s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | mvnsite | 0m 37s | | the patch passed | | +1 :green_heart: | javadoc | 0m 36s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 26s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 46s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 41m 32s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. | | | | 148m 27s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate | | | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure | | | hadoop.hdfs.server.federation.router.TestRouter
[GitHub] [hadoop] omalley commented on a diff in pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
omalley commented on code in PR #5145: URL: https://github.com/apache/hadoop/pull/5145#discussion_r1028638870 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java: ## @@ -88,6 +88,15 @@ protected abstract BufferedReader getReader( protected abstract BufferedWriter getWriter( String path); + /** Review Comment: I think that separating this out from getWriter is confusing and unnecessary. Just make getWriter public in all 3 classes. ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java: ## @@ -88,6 +88,15 @@ protected abstract BufferedReader getReader( protected abstract BufferedWriter getWriter( String path); + /** Review Comment: Although do add the VisibleForTesting annotation. ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java: ## @@ -348,25 +357,28 @@ public boolean putAll( for (Entry entry : toWrite.entrySet()) { String recordPath = entry.getKey(); String recordPathTemp = recordPath + "." + now() + TMP_MARK; - BufferedWriter writer = getWriter(recordPathTemp); + BufferedWriter writer = getBufferedWriter(recordPathTemp); + boolean recordWrittenSuccessfully = true; Review Comment: Initialize the value to false and then only set it to true when the write completes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] slfan1989 commented on pull request #5153: YARN-11381. Fix hadoop-yarn-common module Java Doc Errors.
slfan1989 commented on PR #5153: URL: https://github.com/apache/hadoop/pull/5153#issuecomment-1322823190 @ayushtkn Can you help review this PR? Thank you very much! The checkstyle issue is not caused by this PR; it existed before.
-- This is an automated message from the Apache Git Service.
[GitHub] [hadoop] slfan1989 commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.
slfan1989 commented on PR #5152: URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1322821848 @ayushtkn Can you help review this PR again? Thank you very much!
-- This is an automated message from the Apache Git Service.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5155: HDFS-16851: RBF: Add a utility to dump the StateStore.
hadoop-yetus commented on PR #5155: URL: https://github.com/apache/hadoop/pull/5155#issuecomment-1322720740 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 17s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 42m 59s | | trunk passed | | +1 :green_heart: | compile | 0m 58s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 47s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 45s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 52s | | trunk passed | | +1 :green_heart: | javadoc | 0m 59s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 44s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 19s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 0m 39s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | -1 :x: | javac | 0m 39s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 55 unchanged - 0 fixed = 56 total (was 55) | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | -1 :x: | javac | 0m 35s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 1 new + 55 unchanged - 0 fixed = 56 total (was 55) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 21s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 0m 41s | | the patch passed | | +1 :green_heart: | javadoc | 0m 38s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 57s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 26s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 12s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 39m 57s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5155/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. | | | | 148m 9s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterAdminCLI | | Subsystem | Report/Notes | |--:|:-| | D
[GitHub] [hadoop] omalley commented on a diff in pull request #5155: HDFS-16851: RBF: Add a utility to dump the StateStore.
omalley commented on code in PR #5155: URL: https://github.com/apache/hadoop/pull/5155#discussion_r1028560332 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java: ## @@ -97,6 +104,7 @@ public class RouterAdmin extends Configured implements Tool { private static final Logger LOG = LoggerFactory.getLogger(RouterAdmin.class); + private static final String DUMP_COMMAND = "-dump"; Review Comment: How about -dumpState? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #5142: HDFS-16845: Adds configuration flag to allow clients to use router observer reads without using the ObserverReadProxyProvider.
hadoop-yetus commented on PR #5142: URL: https://github.com/apache/hadoop/pull/5142#issuecomment-1322676213 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 40s | | Maven dependency ordering for branch | | -1 :x: | mvninstall | 23m 21s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | -1 :x: | compile | 1m 45s | [/branch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/branch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs-project in trunk failed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04. | | -1 :x: | compile | 0m 52s | [/branch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/branch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | hadoop-hdfs-project in trunk failed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07. | | +1 :green_heart: | checkstyle | 1m 52s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 17s | | trunk passed | | +1 :green_heart: | javadoc | 3m 27s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 4m 8s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 3m 24s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt) | hadoop-hdfs-client in trunk failed. | | -1 :x: | spotbugs | 0m 47s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. | | -1 :x: | spotbugs | 0m 48s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in trunk failed. | | +1 :green_heart: | shadedclient | 20m 37s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 32s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. 
| | -1 :x: | mvninstall | 0m 30s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt) | hadoop-hdfs-client in the patch failed. | | -1 :x: | mvninstall | 0m 31s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch failed. | | -1 :x: | compile | 0m 32s | [/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs-project in the patch failed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javac | 0m 32s | [/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5142/2/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt)
[GitHub] [hadoop] hadoop-yetus commented on pull request #4949: YARN-8262. get_executable in container-executor should provide meaningful error codes
hadoop-yetus commented on PR #4949: URL: https://github.com/apache/hadoop/pull/4949#issuecomment-1322675838 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 42m 24s | | trunk passed | | +1 :green_heart: | compile | 1m 34s | | trunk passed | | +1 :green_heart: | checkstyle | 0m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 55s | | trunk passed | | +1 :green_heart: | javadoc | 0m 50s | | trunk passed | | +1 :green_heart: | spotbugs | 1m 51s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 44s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 24m 6s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 40s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed | | +1 :green_heart: | cc | 1m 19s | | the patch passed | | +1 :green_heart: | golang | 1m 19s | | the patch passed | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 30s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 42s | | the patch passed | | +1 :green_heart: | javadoc | 0m 34s | | the patch passed | | +1 :green_heart: | spotbugs | 1m 35s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 56s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 23m 48s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 40s | | The patch does not generate ASF License warnings. | | | | 126m 35s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4949/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4949 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc golang | | uname | Linux eada834c321b 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5067667111fbd7263ad000d43895e3ff1c6513b6 | | Default Java | Red Hat, Inc.-1.8.0_352-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4949/5/testReport/ | | Max. process+thread count | 601 (vs. 
ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4949/5/console | | versions | git=2.9.5 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
hadoop-yetus commented on PR #5145: URL: https://github.com/apache/hadoop/pull/5145#issuecomment-1322665295 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 59s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 1s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 9s | | trunk passed | | +1 :green_heart: | compile | 0m 59s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 48s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 44s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 3s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 7s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 13s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 38s | | the patch passed | | +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 38s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 21s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 0m 38s | | the patch passed | | +1 :green_heart: | javadoc | 0m 35s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 23s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 8s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 79m 59s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +0 :ok: | asflicense | 0m 40s | | ASF License check generated no output? 
| | | | 178m 31s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterMountTableCacheRefresh | | | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup | | | hadoop.hdfs.server.federation.router.TestRouterFederationRenameInKerberosEnv | | | hadoop.hdfs.server.federation.router.TestRouterNetworkTopologyServlet | | | hadoop.hdfs.server.federation.router.TestRouterRPCClientRetries | | | hadoop.fs.contract.router.TestRouterHDFSContractSetTimesSecure | | | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination | | | hadoop.fs.contract.router.web.TestRouterWebHDFSContractRootDirectory | | | hadoop.hdfs.server.federation.router.TestRouterAllResolver | | | hadoop.hdfs.server.federation.router.TestRouterTrash | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5145 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 3afaa30ca2cf 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:
[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5155: HDFS-16851: RBF: Add a utility to dump the StateStore.
ayushtkn commented on code in PR #5155: URL: https://github.com/apache/hadoop/pull/5155#discussion_r1028506972 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java: ## @@ -97,6 +104,7 @@ public class RouterAdmin extends Configured implements Tool { private static final Logger LOG = LoggerFactory.getLogger(RouterAdmin.class); + private static final String DUMP_COMMAND = "-dump"; Review Comment: Just `dump` isn't very indicative; how about `dumpStateStore`? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] simbadzina commented on a diff in pull request #5142: HDFS-16845: Adds configuration flag to allow clients to use router observer reads without using the ObserverReadProxyProvider.
simbadzina commented on code in PR #5142: URL: https://github.com/apache/hadoop/pull/5142#discussion_r1028476373 ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java: ## @@ -349,6 +349,13 @@ public static ClientProtocol createProxyWithAlignmentContext( boolean withRetries, AtomicBoolean fallbackToSimpleAuth, AlignmentContext alignmentContext) throws IOException { +if (conf.getBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, Review Comment: If someone passes in null, then the client will not echo back alignment state to the router, which in turn makes the router always forward these calls to the active namenode. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
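A hedged sketch of the flag check being discussed in createProxyWithAlignmentContext. The config key name comes from the diff above; using ClientGSIContext as the client-side alignment context and the exact placement are assumptions:

```java
// Sketch: when the RBF observer-read flag is on and the caller passed no
// alignment context, supply one so state IDs are echoed back to the router
// (otherwise the router keeps routing these calls to the active namenode).
if (alignmentContext == null
    && conf.getBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, false)) {
  alignmentContext = new ClientGSIContext();
}
```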
[GitHub] [hadoop] simbadzina commented on a diff in pull request #5142: HDFS-16845: Adds configuration flag to allow clients to use router observer reads without using the ObserverReadProxyProvider.
simbadzina commented on code in PR #5142: URL: https://github.com/apache/hadoop/pull/5142#discussion_r1028474832 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java: ## @@ -439,4 +440,60 @@ public void testRouterMsync() throws Exception { assertEquals("Four calls should be sent to active", 4, rpcCountForActive); } + + @Test + public void testSingleRead() throws Exception { +List namenodes = routerContext +.getRouter().getNamenodeResolver() +.getNamenodesForNameserviceId(cluster.getNameservices().get(0), true); +assertEquals("First namenode should be observer", namenodes.get(0).getState(), +FederationNamenodeServiceState.OBSERVER); +Path path = new Path("/"); + +long rpcCountForActive; +long rpcCountForObserver; + +// Send read request +fileSystem.listFiles(path, false); +fileSystem.close(); + +rpcCountForActive = routerContext.getRouter().getRpcServer() +.getRPCMetrics().getActiveProxyOps(); +// getListingCall sent to active. +assertEquals("Only one call should be sent to active", 1, rpcCountForActive); + +rpcCountForObserver = routerContext.getRouter().getRpcServer() +.getRPCMetrics().getObserverProxyOps(); +// getList call should be sent to observer +assertEquals("No calls should be sent to observer", 0, rpcCountForObserver); + } + + @Test + public void testSingleReadUsingObserverReadProxyProvider() throws Exception { +fileSystem.close(); +fileSystem = routerContext.getFileSystemWithObserverReadProxyProvider(); Review Comment: I agree. I'm now moving the code into the individual tests. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
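A sketch of what moving the setup into the individual test could look like, using the helper named in the quoted diff; the try-with-resources shape and the listing call are illustrative assumptions:

```java
@Test
public void testSingleReadUsingObserverReadProxyProvider() throws Exception {
  // Acquire the proxy-provider-backed file system inside the test itself
  // instead of closing and swapping the shared fixture.
  try (FileSystem fs =
      routerContext.getFileSystemWithObserverReadProxyProvider()) {
    fs.listFiles(new Path("/"), false);
    // ... assert on active/observer proxy op counts, as in testSingleRead
  }
}
```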
[GitHub] [hadoop] hadoop-yetus commented on pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
hadoop-yetus commented on PR #5145: URL: https://github.com/apache/hadoop/pull/5145#issuecomment-1322583721 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 43m 49s | | trunk passed | | +1 :green_heart: | compile | 0m 54s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 50s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 39s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 53s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 49s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 42s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 37s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 24s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 0m 40s | | the patch passed | | +1 :green_heart: | javadoc | 0m 40s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 34s | | the patch passed | | +1 :green_heart: | shadedclient | 25m 25s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 41m 4s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. 
| | | | 153m 52s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5145 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux b921b7374645 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 65a8b978442964edd6176be0144d9f1ca49ccc96 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/3/testReport/ | | Max. process+thread count | 2607 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generate
[GitHub] [hadoop] hadoop-yetus commented on pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
hadoop-yetus commented on PR #5145: URL: https://github.com/apache/hadoop/pull/5145#issuecomment-1322571575 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 10s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 44m 21s | | trunk passed | | +1 :green_heart: | compile | 0m 56s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 47s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 51s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 4s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 39s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 37s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 21s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 0m 40s | | the patch passed | | +1 :green_heart: | javadoc | 0m 39s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 33s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 48s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 40m 26s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. 
| | | | 151m 44s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5145 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 769098eaa7da 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 598979b4d92a04ee2ded87efcc6b2c555cf76455 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5145/2/testReport/ | | Max. process+thread count | 2599 (vs. ulimit of 5500) | | modules | C: hado
[GitHub] [hadoop] omalley opened a new pull request, #5155: HDFS-16851: RBF: Add a utility to dump the StateStore.
omalley opened a new pull request, #5155: URL: https://github.com/apache/hadoop/pull/5155 ### Description of PR Adds a utility to dump the RBF StateStore. ### How was this patch tested? It was tested manually against our RBF cluster -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] simbadzina commented on a diff in pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
simbadzina commented on code in PR #5145: URL: https://github.com/apache/hadoop/pull/5145#discussion_r1028419083 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java: ## @@ -88,6 +88,15 @@ protected abstract BufferedReader getReader( protected abstract BufferedWriter getWriter( String path); + /** + * Convenience method to allow mocking of the protected + * {@link #getWriter(String)}. + */ + @VisibleForTesting + public BufferedWriter getBufferedWriter(String path) { Review Comment: The annotation states that the member is public only for tests, so non-test usages outside of the class should be flagged as errors. https://github.com/apache/hadoop/blob/8f971b0e5413b491a2c7043bd25b046777e07395/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/VisibleForTesting.java#L27-L38 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
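As an illustration of why the widened method helps, a sketch of the kind of stubbing it enables. Mockito in the test scope is assumed, and everything beyond getBufferedWriter itself is hypothetical:

```java
// A failing writer is injected through the public getBufferedWriter hook,
// without having to override the protected getWriter in each subclass.
BufferedWriter failingWriter = mock(BufferedWriter.class);
doThrow(new IOException("simulated disk error"))
    .when(failingWriter).write(anyString());

StateStoreFileImpl driver = spy(new StateStoreFileImpl());
doReturn(failingWriter).when(driver).getBufferedWriter(anyString());
```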
[GitHub] [hadoop] simbadzina commented on a diff in pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
simbadzina commented on code in PR #5145: URL: https://github.com/apache/hadoop/pull/5145#discussion_r1028394821 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java: ## @@ -366,7 +375,7 @@ public boolean putAll( } } // Commit - if (!rename(recordPathTemp, recordPath)) { + if (success && !rename(recordPathTemp, recordPath)) { Review Comment: Good catch @mkuchenbecker, thanks. I've added a new variable. I prefer a new variable to breaking out of the loop: that way we only skip renaming the files whose writes failed, and the ordering of records doesn't matter. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mkuchenbecker commented on a diff in pull request #5142: HDFS-16845: Adds configuration flag to allow clients to use router observer reads without using the ObserverReadProxyProvider
mkuchenbecker commented on code in PR #5142: URL: https://github.com/apache/hadoop/pull/5142#discussion_r1028371062 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java: ## @@ -122,7 +123,9 @@ public void startUpCluster(int numberOfObserver, Configuration confOverrides) th cluster.waitActiveNamespaces(); routerContext = cluster.getRandomRouter(); -fileSystem = routerContext.getFileSystemWithObserverReadsEnabled(); +Configuration confToEnableObserverRead = new Configuration(); + confToEnableObserverRead.setBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, true); +fileSystem = routerContext.getFileSystem(confToEnableObserverRead); Review Comment: We are losing coverage on `getFileSystemWithObserverReadsEnabled` with this change; we should likely be testing both as they are both valid use-cases whether you want to msync or not. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
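A sketch of keeping both paths covered. Both helpers appear in this thread; splitting them into two fixtures here is an assumption about how the tests could be arranged:

```java
// Flag-based path: a plain DFS client with the RBF observer-read flag set.
Configuration confToEnableObserverRead = new Configuration();
confToEnableObserverRead.setBoolean(
    HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, true);
FileSystem flagBasedFs = routerContext.getFileSystem(confToEnableObserverRead);

// Proxy-provider path: the pre-existing ObserverReadProxyProvider route.
FileSystem proxyProviderFs =
    routerContext.getFileSystemWithObserverReadsEnabled();
```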
[GitHub] [hadoop] mkuchenbecker commented on a diff in pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
mkuchenbecker commented on code in PR #5145: URL: https://github.com/apache/hadoop/pull/5145#discussion_r1028385376 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java: ## @@ -234,6 +234,25 @@ public void testInsert( assertEquals(11, records2.size()); } + public void testInsertWithErrorDuringWrite( Review Comment: To test my [above comment](https://github.com/apache/hadoop/pull/5145/files#r1028382408), you would need two iterations of the loop where the first is a failure and the second is a success. The test passes despite the issue because there is only a single write. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
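A sketch of the two-iteration scenario being requested: the first record's write fails, the second succeeds, and only the second should be committed. Mockito consecutive stubbing is assumed, and record1/record2 are hypothetical records:

```java
// The first call to getBufferedWriter returns a writer that fails; the
// second returns a working mock, giving one failed and one good iteration.
BufferedWriter badWriter = mock(BufferedWriter.class);
doThrow(new IOException("first write fails")).when(badWriter).write(anyString());
BufferedWriter goodWriter = mock(BufferedWriter.class);

StateStoreFileImpl driver = spy(new StateStoreFileImpl());
doReturn(badWriter, goodWriter).when(driver).getBufferedWriter(anyString());

// putAll should report the failure overall but still commit record2.
assertFalse(driver.putAll(Arrays.asList(record1, record2), true, false));
```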
[GitHub] [hadoop] xkrogen merged pull request #4201: HDFS-16547. [SBN read] Namenode in safe mode should not be transfered to observer state
xkrogen merged PR #4201: URL: https://github.com/apache/hadoop/pull/4201 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mkuchenbecker commented on a diff in pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
mkuchenbecker commented on code in PR #5145: URL: https://github.com/apache/hadoop/pull/5145#discussion_r1028382408 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java: ## @@ -366,7 +375,7 @@ public boolean putAll( } } // Commit - if (!rename(recordPathTemp, recordPath)) { + if (success && !rename(recordPathTemp, recordPath)) { Review Comment: This logic doesn't seem right. `success` is scoped to the function and this rename is within the for loop. As soon as any write fails we will never commit any further changes; if that's the case, we should likely just `break`. So either we want to stop committing changes after the first failure, in which case we should break, or we need a different variable. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xkrogen commented on pull request #4201: HDFS-16547. [SBN read] Namenode in safe mode should not be transfered to observer state
xkrogen commented on PR #4201: URL: https://github.com/apache/hadoop/pull/4201#issuecomment-1322464405 `TestLeaseRecovery2` indeed seems to be broken; I confirmed that the behavior is the same before/after applying this PR. Merging to trunk. Thanks for the contribution @tomscut ! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mkuchenbecker commented on a diff in pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
mkuchenbecker commented on code in PR #5145: URL: https://github.com/apache/hadoop/pull/5145#discussion_r1028378298 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileBaseImpl.java: ## @@ -88,6 +88,15 @@ protected abstract BufferedReader getReader( protected abstract BufferedWriter getWriter( String path); + /** + * Convenience method to allow mocking of the protected + * {@link #getWriter(String)}. + */ + @VisibleForTesting + public BufferedWriter getBufferedWriter(String path) { Review Comment: Private? Protected? There is no reason to have `VisibleForTesting` if public. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mkuchenbecker commented on a diff in pull request #5142: HDFS-16845: Adds configuration flag to allow clients to use router observer reads without using the ObserverReadProxyProvider
mkuchenbecker commented on code in PR #5142: URL: https://github.com/apache/hadoop/pull/5142#discussion_r1028372645 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java: ## @@ -439,4 +440,60 @@ public void testRouterMsync() throws Exception { assertEquals("Four calls should be sent to active", 4, rpcCountForActive); } + + @Test + public void testSingleRead() throws Exception { +List namenodes = routerContext +.getRouter().getNamenodeResolver() +.getNamenodesForNameserviceId(cluster.getNameservices().get(0), true); +assertEquals("First namenode should be observer", namenodes.get(0).getState(), +FederationNamenodeServiceState.OBSERVER); +Path path = new Path("/"); + +long rpcCountForActive; +long rpcCountForObserver; + +// Send read request +fileSystem.listFiles(path, false); +fileSystem.close(); + +rpcCountForActive = routerContext.getRouter().getRpcServer() +.getRPCMetrics().getActiveProxyOps(); +// getListingCall sent to active. +assertEquals("Only one call should be sent to active", 1, rpcCountForActive); + +rpcCountForObserver = routerContext.getRouter().getRpcServer() +.getRPCMetrics().getObserverProxyOps(); +// getList call should be sent to observer +assertEquals("No calls should be sent to observer", 0, rpcCountForObserver); + } + + @Test + public void testSingleReadUsingObserverReadProxyProvider() throws Exception { +fileSystem.close(); +fileSystem = routerContext.getFileSystemWithObserverReadProxyProvider(); Review Comment: This seems wrong to special-case in this way. Either manage it during setup or set it for every function, but I'd advise against mixing the two. ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java: ## @@ -122,7 +123,9 @@ public void startUpCluster(int numberOfObserver, Configuration confOverrides) th cluster.waitActiveNamespaces(); routerContext = cluster.getRandomRouter(); -fileSystem = routerContext.getFileSystemWithObserverReadsEnabled(); +Configuration confToEnableObserverRead = new Configuration(); + confToEnableObserverRead.setBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, true); +fileSystem = routerContext.getFileSystem(confToEnableObserverRead); Review Comment: We are losing coverage on `getFileSystemWithObserverReadsEnabled` with this change; we should likely be testing both as they are both valid use-cases whether you want to msync or not. ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java: ## @@ -349,6 +349,13 @@ public static ClientProtocol createProxyWithAlignmentContext( boolean withRetries, AtomicBoolean fallbackToSimpleAuth, AlignmentContext alignmentContext) throws IOException { +if (conf.getBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE, Review Comment: What was the original behaviour where someone passed in `null` to this function? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mkuchenbecker commented on pull request #5142: HDFS-16845: Adds configuration flag to allow clients to use router observer reads without using the ObserverReadProxyProvider.
mkuchenbecker commented on PR #5142: URL: https://github.com/apache/hadoop/pull/5142#issuecomment-1322457075 Mostly mechanical review, with some questions. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] simbadzina commented on pull request #5145: HDFS-16847: Prevents StateStoreFileSystemImpl from committing tmp file after encountering an IOException.
simbadzina commented on PR #5145: URL: https://github.com/apache/hadoop/pull/5145#issuecomment-1322414231 I've added a unit test. Without my patch, we write a zero-byte file and end up with the following error when trying to read the state store. > Caused by: java.io.IOException: Cannot read /hdfs-federation/MembershipState/randomString--459989902-randomString--826693283-randomString--606850105 for record MembershipState at org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileBaseImpl.getRecord(StateStoreFileBaseImpl.java:290) at org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileBaseImpl.get(StateStoreFileBaseImpl.java:222) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16761) KMSClientProvider does not work with client using ticket logged in externally
[ https://issues.apache.org/jira/browse/HADOOP-16761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636783#comment-17636783 ] ASF GitHub Bot commented on HADOOP-16761: - hadoop-yetus commented on PR #1769: URL: https://github.com/apache/hadoop/pull/1769#issuecomment-132241 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 2s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 9s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 59s | | trunk passed | | +1 :green_heart: | compile | 23m 28s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 21m 5s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 4s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 1s | | trunk passed | | +1 :green_heart: | javadoc | 3m 13s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 3m 27s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 27s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 44s | | the patch passed | | +1 :green_heart: | compile | 25m 15s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 25m 15s | | the patch passed | | +1 :green_heart: | compile | 21m 38s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 21m 38s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 4s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1769/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 70 unchanged - 0 fixed = 73 total (was 70) | | +1 :green_heart: | mvnsite | 3m 46s | | the patch passed | | +1 :green_heart: | javadoc | 2m 58s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 3m 19s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 7m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 11s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 52s | | hadoop-common in the patch passed. | | -1 :x: | unit | 421m 26s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1769/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. 
| | +1 :green_heart: | asflicense | 1m 43s | | The patch does not generate ASF License warnings. | | | | 671m 43s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerService | | | hadoop.hdfs.TestLeaseRecovery2 | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1769/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1769 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 97359b921443 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ad788824d4041dfe5c496a16b270c55e8172801c | | Default Java | Private Build-1.8.0_342-8u342
[GitHub] [hadoop] hadoop-yetus commented on pull request #1769: HADOOP-16761. KMSClientProvider does not work with client using ticke…
hadoop-yetus commented on PR #1769: URL: https://github.com/apache/hadoop/pull/1769#issuecomment-132241 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 2s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 9s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 59s | | trunk passed | | +1 :green_heart: | compile | 23m 28s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 21m 5s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 4s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 1s | | trunk passed | | +1 :green_heart: | javadoc | 3m 13s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 3m 27s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 27s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 44s | | the patch passed | | +1 :green_heart: | compile | 25m 15s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 25m 15s | | the patch passed | | +1 :green_heart: | compile | 21m 38s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 21m 38s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 4s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1769/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 70 unchanged - 0 fixed = 73 total (was 70) | | +1 :green_heart: | mvnsite | 3m 46s | | the patch passed | | +1 :green_heart: | javadoc | 2m 58s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 3m 19s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 7m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 11s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 52s | | hadoop-common in the patch passed. | | -1 :x: | unit | 421m 26s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1769/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 43s | | The patch does not generate ASF License warnings. 
| | | | 671m 43s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerService | | | hadoop.hdfs.TestLeaseRecovery2 | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1769/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1769 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 97359b921443 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ad788824d4041dfe5c496a16b270c55e8172801c | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop
[GitHub] [hadoop] xkrogen commented on pull request #4209: HDFS-16550. Improper cache-size for journal node may cause cluster crash
xkrogen commented on PR #4209: URL: https://github.com/apache/hadoop/pull/4209#issuecomment-1322361104 I am -1 on the PR as-is. We have publicly exposed the current config `dfs.journalnode.edit-cache-size.bytes`; we can't just rename it and change the behavior now. I also think there is a lot of value in being able to configure the cache size exactly, rather than as a fraction, but I do recognize the value in using a ratio as a helpful default (one less knob to tune). I would propose: * _Add_ (not replace) a new config `dfs.journalnode.edit-cache-size.fraction` (or `.ratio`? but either way I think we should maintain the `edit-cache-size` prefix) * If `edit-cache-size.bytes` is set, use that value. Otherwise, use the value of `edit-cache-size.fraction * Runtime#maxMemory()`, which has a default value set. * I would suggest 0.5 rather than 0.3 for the default value of `fraction` but am open to discussion there. This still does change the default behavior slightly, since before you would get a 1GB cache and now you get `-Xmx * 0.5`, but there is an easy way to preserve the old behavior and if you've explicitly configured the cache size (which you probably did, if you're using the feature) then there is no change. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
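A minimal sketch of the resolution order proposed above. Only `dfs.journalnode.edit-cache-size.bytes` is an existing key; the fraction key, the 0.5 default, and the class and method names here are illustrative, not the patch's actual code:

```java
import org.apache.hadoop.conf.Configuration;

// Sketch only: demonstrates the proposed precedence, assuming a negative
// default means "not explicitly configured".
public final class EditCacheSizeResolver {
  static final String BYTES_KEY = "dfs.journalnode.edit-cache-size.bytes";
  static final String FRACTION_KEY = "dfs.journalnode.edit-cache-size.fraction";
  static final float FRACTION_DEFAULT = 0.5f; // value suggested above

  static long resolveCacheSizeBytes(Configuration conf) {
    long explicitBytes = conf.getLong(BYTES_KEY, -1L);
    if (explicitBytes > 0) {
      // An explicitly configured size wins, preserving the old behavior.
      return explicitBytes;
    }
    // Otherwise size the cache as a fraction of the JVM max heap.
    float fraction = conf.getFloat(FRACTION_KEY, FRACTION_DEFAULT);
    return (long) (fraction * Runtime.getRuntime().maxMemory());
  }
}
```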
[GitHub] [hadoop] xkrogen commented on a diff in pull request #4744: HDFS-16689. Standby NameNode crashes when transitioning to Active with in-progress tailer
xkrogen commented on code in PR #4744: URL: https://github.com/apache/hadoop/pull/4744#discussion_r1028281649 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java: ## @@ -174,6 +175,11 @@ protected FSImage(Configuration conf, archivalManager = new NNStorageRetentionManager(conf, storage, editLog); FSImageFormatProtobuf.initParallelLoad(conf); } + + @VisibleForTesting + void setEditLog(FSEditLog editLog) { +this.editLog = editLog; + } Review Comment: Can we use `DFSTestUtil.setEditLogForTesting()` for this instead of adding a new method? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xkrogen merged pull request #5099: HDFS-16832. [SBN READ] Fix NPE when check the block location of empty…
xkrogen merged PR #5099: URL: https://github.com/apache/hadoop/pull/5099 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xkrogen commented on pull request #5099: HDFS-16832. [SBN READ] Fix NPE when check the block location of empty…
xkrogen commented on PR #5099: URL: https://github.com/apache/hadoop/pull/5099#issuecomment-1322317231 `TestLeaseRecovery2` has been flaky; I am not worried about the failure. Merging to trunk. Thank you for the contribution @zhengchenyu ! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] szilard-nemeth commented on a diff in pull request #4655: YARN-11216. Avoid unnecessary reconstruction of ConfigurationProperties
szilard-nemeth commented on code in PR #4655: URL: https://github.com/apache/hadoop/pull/4655#discussion_r1028179146 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java: ## @@ -242,6 +244,10 @@ public class Configuration implements Iterable>, private boolean restrictSystemProps = restrictSystemPropsDefault; private boolean allowNullValueProperties = false; + private BiConsumer propAddListener; Review Comment: Can we use 'properties' as a name for the fields, setters and the class (PropertiesWithListener) without the abbreviation? I think it's not too long, and abbreviating is not really necessary in this case. ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java: ## @@ -4060,4 +4077,26 @@ private void putIntoUpdatingResource(String key, String[] value) { } localUR.put(key, value); } + private class PropWithListener extends Properties { + +private final Configuration configuration; + +public PropWithListener(Configuration configuration) { + this.configuration = configuration; +} +@Override Review Comment: Nit: Add a newline between the constructor and the setProperty method. ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java: ## @@ -871,6 +877,15 @@ public Configuration(Configuration other) { setQuietMode(other.getQuietMode()); } + protected synchronized void setPropListeners( + BiConsumer propAddListener, + Consumer propRemoveListener + ) { +this.properties = null; Review Comment: Here, null values could be accepted for the consumers; at least, there is nothing preventing them from being null. In getProps, the PropWithListener is created if any of the listeners is not null (so the other one is allowed to be null). Calling setProperty on PropWithListener does not check whether those fields are null, which is dangerous. Either prevent them from being null in the constructor (fail fast, as sketched below) or do a null check in setProperty. Could you please also add a test case to cover the null scenarios? ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ConfigurationProperties.java: ## @@ -55,6 +58,17 @@ public ConfigurationProperties(Map props) { storePropertiesInPrefixNodes(props); } + /** + * A constructor defined in order to conform to the type used by + * {@code Configuration}. It must only be called by String keys and values. Review Comment: ```suggestion * {@code Configuration}. It must only be called with String keys and values. ``` ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java: ## @@ -18,15 +18,27 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; -import org.apache.hadoop.classification.VisibleForTesting; Review Comment: Please exclude formatting (i.e. organize imports) from your commit and only add required changes in imports. 
## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ConfigurationProperties.java: ## @@ -158,39 +208,49 @@ private void copyProperties( */ private void storePropertiesInPrefixNodes(Map props) { for (Map.Entry prop : props.entrySet()) { - List propertyKeyParts = splitPropertyByDelimiter(prop.getKey()); - if (!propertyKeyParts.isEmpty()) { -PrefixNode node = findOrCreatePrefixNode(nodes, -propertyKeyParts.iterator()); + PrefixNode node = getNode(prop.getKey()); + if (node != null) { node.getValues().put(prop.getKey(), prop.getValue()); - } else { -LOG.warn("Empty configuration property, skipping..."); } } } + /** + * Finds the node that matches the whole key or create it, if it does not exist. + * @param name name of the property + * @return the found or created node, if the name is empty, than return with null + */ Review Comment: ```suggestion * Finds the node that matches the whole key or create it if it does not exist. * @param name name of the property * @return the found or newly created node, otherwise return null if the name is empty */ ``` ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ConfigurationProperties.java: ## @@ -94,6 +108,42 @@ public Map getPropertiesWithPrefix( return properties; } + /** + * Update or create value in the nodes. + *
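A hedged sketch of the fail-fast option from the review above, reusing the `PropWithListener` name from the diff; the listener generics and the standalone shape of the class are assumptions for illustration, not the patch's actual code:

```java
import java.util.Objects;
import java.util.Properties;
import java.util.function.BiConsumer;

// Sketch only: failing fast on a null listener in the constructor means
// setProperty never has to null-check the field.
class PropWithListener extends Properties {
  private final BiConsumer<String, String> propAddListener;

  PropWithListener(BiConsumer<String, String> propAddListener) {
    this.propAddListener = Objects.requireNonNull(
        propAddListener, "propAddListener must not be null");
  }

  @Override
  public synchronized Object setProperty(String key, String value) {
    Object previous = super.setProperty(key, value);
    propAddListener.accept(key, value); // notify after the write succeeds
    return previous;
  }
}
```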
[jira] [Commented] (HADOOP-18533) RPC Client performance improvement
[ https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636744#comment-17636744 ] ASF GitHub Bot commented on HADOOP-18533: - hadoop-yetus commented on PR #5151: URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1322263878 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 35s | | trunk passed | | +1 :green_heart: | compile | 23m 34s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 21m 1s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 31s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 52s | | trunk passed | | -1 :x: | javadoc | 1m 48s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. | | +1 :green_heart: | javadoc | 1m 16s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 52s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 53s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 4s | | the patch passed | | +1 :green_heart: | compile | 22m 40s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 22m 40s | | the patch passed | | +1 :green_heart: | compile | 21m 5s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 21m 5s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 36s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 2 new + 185 unchanged - 1 fixed = 187 total (was 186) | | +1 :green_heart: | mvnsite | 1m 53s | | the patch passed | | -1 :x: | javadoc | 1m 16s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. 
| | +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 8s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 44s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 1m 20s | | The patch does not generate ASF License warnings. | | | | 213m 24s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5151 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux bb2eeff8f248 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d4d033ddbaf44b451ad1f0d31
[GitHub] [hadoop] hadoop-yetus commented on pull request #5151: HADOOP-18533. RPC Client performance improvement
hadoop-yetus commented on PR #5151: URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1322263878 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 35s | | trunk passed | | +1 :green_heart: | compile | 23m 34s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 21m 1s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 31s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 52s | | trunk passed | | -1 :x: | javadoc | 1m 48s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. | | +1 :green_heart: | javadoc | 1m 16s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 52s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 53s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 4s | | the patch passed | | +1 :green_heart: | compile | 22m 40s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 22m 40s | | the patch passed | | +1 :green_heart: | compile | 21m 5s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 21m 5s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 36s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 2 new + 185 unchanged - 1 fixed = 187 total (was 186) | | +1 :green_heart: | mvnsite | 1m 53s | | the patch passed | | -1 :x: | javadoc | 1m 16s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. | | +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 8s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 44s | | hadoop-common in the patch passed. 
| | +1 :green_heart: | asflicense | 1m 20s | | The patch does not generate ASF License warnings. | | | | 213m 24s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5151/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5151 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux bb2eeff8f248 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d4d033ddbaf44b451ad1f0d317747b74eb876b55 | | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~2
[GitHub] [hadoop] hadoop-yetus commented on pull request #5119: YARN-5607. Document TestContainerResourceUsage#waitForContainerCompletion
hadoop-yetus commented on PR #5119: URL: https://github.com/apache/hadoop/pull/5119#issuecomment-1322241513 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 60 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 53s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 1s | | trunk passed | | +1 :green_heart: | compile | 25m 11s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 21m 44s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 4s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 17s | | trunk passed | | +1 :green_heart: | javadoc | 2m 50s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 36s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 23s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 29s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 23m 50s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 9s | | the patch passed | | +1 :green_heart: | compile | 24m 32s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | -1 :x: | javac | 24m 32s | [/results-compile-javac-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/4/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 2815 unchanged - 0 fixed = 2816 total (was 2815) | | +1 :green_heart: | compile | 21m 38s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | -1 :x: | javac | 21m 38s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/4/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 1 new + 2611 unchanged - 0 fixed = 2612 total (was 2611) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 3m 53s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/4/artifact/out/results-checkstyle-root.txt) | root: The patch generated 325 new + 774 unchanged - 3 fixed = 1099 total (was 777) | | +1 :green_heart: | mvnsite | 3m 15s | | the patch passed | | +1 :green_heart: | javadoc | 2m 45s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 33s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 56s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 40s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 107m 31s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5119/4/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 27m 34s | | hadoop-yarn-client in the patch passed. | | -1 :x: | unit | 9m 17s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranc
[GitHub] [hadoop] szilard-nemeth commented on a diff in pull request #4949: YARN-8262. get_executable in container-executor should provide meaningful error codes
szilard-nemeth commented on code in PR #4949: URL: https://github.com/apache/hadoop/pull/4949#discussion_r1028166041 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java: ## @@ -175,8 +175,12 @@ public enum ExitCode { COULD_NOT_CREATE_WORK_DIRECTORIES(35), COULD_NOT_CREATE_APP_LOG_DIRECTORIES(36), COULD_NOT_CREATE_TMP_DIRECTORIES(37), -ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS(38); - +ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS(38), +CANT_GET_EXECUTABLE_NAME_FROM_READLINK(80), +TOO_LONG_EXECUTOR_PATH(81), +CANT_GET_EXECUTABLE_NAME_FROM_KERNEL(82), Review Comment: Replace CANT with CANNOT ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java: ## @@ -175,8 +175,12 @@ public enum ExitCode { COULD_NOT_CREATE_WORK_DIRECTORIES(35), COULD_NOT_CREATE_APP_LOG_DIRECTORIES(36), COULD_NOT_CREATE_TMP_DIRECTORIES(37), -ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS(38); - +ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS(38), +CANT_GET_EXECUTABLE_NAME_FROM_READLINK(80), +TOO_LONG_EXECUTOR_PATH(81), +CANT_GET_EXECUTABLE_NAME_FROM_KERNEL(82), +CANT_GET_EXECUTABLE_NAME_FROM_PID(83), +WRONGPATH_OF_EXECUTABLE(84); Review Comment: Replace WRONGPATH with WRONG_PATH ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c: ## @@ -337,6 +337,16 @@ const char *get_error_message(const int error_code) { return "runC run failed"; case ERROR_RUNC_REAP_LAYER_MOUNTS_FAILED: return "runC reap layer mounts failed"; + case CANT_GET_EXECUTABLE_NAME_FROM_READLINK: +return "Can't get executable name from readlink"; + case TOO_LONG_EXECUTOR_PATH: +return "Too long executor path"; + case CANT_GET_EXECUTABLE_NAME_FROM_KERNEL: +return "Can't get executable name from kernel"; Review Comment: Replace Can't with Cannot ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c: ## @@ -337,6 +337,16 @@ const char *get_error_message(const int error_code) { return "runC run failed"; case ERROR_RUNC_REAP_LAYER_MOUNTS_FAILED: return "runC reap layer mounts failed"; + case CANT_GET_EXECUTABLE_NAME_FROM_READLINK: +return "Can't get executable name from readlink"; + case TOO_LONG_EXECUTOR_PATH: +return "Too long executor path"; + case CANT_GET_EXECUTABLE_NAME_FROM_KERNEL: +return "Can't get executable name from kernel"; + case CANT_GET_EXECUTABLE_NAME_FROM_PID: +return "Can't get executable name from pid"; + case WRONGPATH_OF_EXECUTABLE: +return "Wrongpath of executable"; Review Comment: Replace "Wrongpath" with "Wrong path" ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c: ## @@ -337,6 +337,16 @@ const char *get_error_message(const int error_code) { return "runC run failed"; case ERROR_RUNC_REAP_LAYER_MOUNTS_FAILED: return "runC reap layer mounts failed"; + case CANT_GET_EXECUTABLE_NAME_FROM_READLINK: +return "Can't get executable name from readlink"; + case TOO_LONG_EXECUTOR_PATH: +return "Too long executor path"; + case CANT_GET_EXECUTABLE_NAME_FROM_KERNEL: +return "Can't get executable name from kernel"; + case CANT_GET_EXECUTABLE_NAME_FROM_PID: +return "Can't get executable name from pid"; Review Comment: Replace Can't with Cannot ## 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java: ## @@ -175,8 +175,12 @@ public enum ExitCode { COULD_NOT_CREATE_WORK_DIRECTORIES(35), COULD_NOT_CREATE_APP_LOG_DIRECTORIES(36), COULD_NOT_CREATE_TMP_DIRECTORIES(37), -ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS(38); - +ERROR_CREATE_CONTAINER_DIRECTORIES_ARGUMENTS(38), +CANT_GET_EXECUTABLE_NAME_FROM_READLINK(80), +TOO_LONG_EXECUTOR_PATH(81), +CANT_GET_EXECUTABLE_NAME_FROM_KERNEL(82), +CANT_GET_EXECUTABLE_NAME_FROM_PID(83), Review Comment: Replace CANT with CANNOT ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h: ## @@ -104,7 +104,12 @@ enum errorcodes { ERROR_RUNC_SETUP_FAILED = 76, ERROR_RUNC_RUN_FAILED = 77, ERROR_RUNC_REAP_LAYER_MOUNTS_FAILED = 78, - ERROR_DOCKER_CONTAINER_EXEC_FAILED = 79 + ERROR_DOCKER_CONTAINER_E
[GitHub] [hadoop] slfan1989 commented on pull request #5146: YARN-11373. [Federation] Support refreshQueues、refreshNodes API's for Federation.
slfan1989 commented on PR #5146: URL: https://github.com/apache/hadoop/pull/5146#issuecomment-1322185823 @goiri Can you help review this PR? Thank you very much! The javadoc errors are not caused by this PR's code; I submitted two PRs to fix them: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors. (#5152) YARN-11381. Fix hadoop-yarn-common module Java Doc Errors. (#5153) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.
hadoop-yetus commented on PR #5152: URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1322078791 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 35s | | trunk passed | | +1 :green_heart: | compile | 0m 59s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 0m 51s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 6s | | trunk passed | | -1 :x: | javadoc | 0m 56s | [/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/5/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-yarn-api in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. | | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 13s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 53s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 40s | | the patch passed | | +1 :green_heart: | compile | 0m 51s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 0m 51s | | the patch passed | | +1 :green_heart: | compile | 0m 39s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 39s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 21s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: The patch generated 0 new + 133 unchanged - 27 fixed = 133 total (was 160) | | +1 :green_heart: | mvnsite | 0m 43s | | the patch passed | | +1 :green_heart: | javadoc | 0m 36s | | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 0 new + 0 unchanged - 107 fixed = 0 total (was 107) | | +1 :green_heart: | javadoc | 0m 31s | | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 0 unchanged - 146 fixed = 0 total (was 146) | | +1 :green_heart: | spotbugs | 1m 57s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 37s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 9s | | hadoop-yarn-api in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. | | | | 98m 12s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5152 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 6c865aec7045 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / b39efd703b2a93fe0bc113837df22668c455b2eb | | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Test Results | https
[GitHub] [hadoop] hadoop-yetus commented on pull request #5153: YARN-11381. Fix hadoop-yarn-common module Java Doc Errors.
hadoop-yetus commented on PR #5153: URL: https://github.com/apache/hadoop/pull/5153#issuecomment-1321975381 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 59s | | trunk passed | | +1 :green_heart: | compile | 1m 4s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 1m 1s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 5s | | trunk passed | | +1 :green_heart: | javadoc | 1m 15s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 2m 9s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 6s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 49s | | the patch passed | | +1 :green_heart: | compile | 0m 52s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 52s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 33s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5153/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 2 new + 576 unchanged - 23 fixed = 578 total (was 599) | | +1 :green_heart: | mvnsite | 0m 46s | | the patch passed | | +1 :green_heart: | javadoc | 0m 41s | | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 0 new + 0 unchanged - 99 fixed = 0 total (was 99) | | +1 :green_heart: | javadoc | 0m 39s | | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 0 new + 0 unchanged - 268 fixed = 0 total (was 268) | | +1 :green_heart: | spotbugs | 1m 51s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 16s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 5m 30s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. 
| | | | 111m 55s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5153/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5153 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 4f7abc6f676f 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c4b08cb2428e3d8c61a941caa4357892639600ae | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/jo
[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.
ayushtkn commented on code in PR #5152: URL: https://github.com/apache/hadoop/pull/5152#discussion_r1027903125 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/package-info.java: ## @@ -1,4 +1,4 @@ -/* +/** Review Comment: Ok -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.
slfan1989 commented on code in PR #5152: URL: https://github.com/apache/hadoop/pull/5152#discussion_r1027897718 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/package-info.java: ## @@ -1,4 +1,4 @@ -/* +/** Review Comment: I rolled back the code, but found that javadoc then fails to compile it; we still need to add the extra * on line 1 of package-info.java. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #5131: YARN-11350. [Federation] Router Support DelegationToken With ZK.
hadoop-yetus commented on PR #5131: URL: https://github.com/apache/hadoop/pull/5131#issuecomment-1321858433 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 31s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 2s | | trunk passed | | +1 :green_heart: | compile | 4m 7s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 3m 26s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 48s | | trunk passed | | +1 :green_heart: | javadoc | 2m 40s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 15s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 4m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 55s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 4s | | the patch passed | | +1 :green_heart: | compile | 4m 21s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 4m 21s | | the patch passed | | +1 :green_heart: | compile | 3m 32s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 3m 32s | | the patch passed | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 11s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 25s | | the patch passed | | +1 :green_heart: | javadoc | 0m 38s | | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 0 new + 47 unchanged - 1 fixed = 47 total (was 48) | | +1 :green_heart: | javadoc | 0m 51s | | hadoop-yarn-server-resourcemanager in the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04. | | +1 :green_heart: | javadoc | 0m 31s | | hadoop-yarn-server-router in the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04. | | +1 :green_heart: | javadoc | 0m 36s | | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 0 new + 47 unchanged - 1 fixed = 47 total (was 48) | | +1 :green_heart: | javadoc | 0m 46s | | hadoop-yarn-server-resourcemanager in the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07. | | +1 :green_heart: | javadoc | 0m 25s | | hadoop-yarn-server-router in the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07. 
| | +1 :green_heart: | spotbugs | 4m 57s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 40s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 3m 13s | | hadoop-yarn-server-common in the patch passed. | | +1 :green_heart: | unit | 99m 21s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 0m 38s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 51s | | The patch does not generate ASF License warnings. | | | | 234m 43s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5131/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5131 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle c
[jira] [Commented] (HADOOP-18523) Allow to retrieve an object from MinIO (S3 API) with a very restrictive policy
[ https://issues.apache.org/jira/browse/HADOOP-18523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636624#comment-17636624 ] Sébastien Burton commented on HADOOP-18523: --- Hello [~ste...@apache.org], I cannot take the time for this in the near future, sorry about that. :( You already helped me a lot (I'm very happy you answered, again thank you! :)) and I wouldn't expect you to do more here. As this doesn't affect many people, {*}you could indeed close this as a "wontfix"{*}. I'll keep this in mind in case I can investigate it in the future! ;) > Allow to retrieve an object from MinIO (S3 API) with a very restrictive policy > -- > > Key: HADOOP-18523 > URL: https://issues.apache.org/jira/browse/HADOOP-18523 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Reporter: Sébastien Burton >Priority: Major > > Hello, > We're using Spark > ({{{}"org.apache.spark:spark-[catalyst|core|sql]_2.12:3.2.2"{}}}) and Hadoop > ({{{}"org.apache.hadoop:hadoop-common:3.3.3"{}}}) and want to retrieve an > object stored in a MinIO bucket (MinIO implements the S3 API). Spark relies > on Hadoop for this operation. > The MinIO bucket (that we don't manage) is configured with a very restrictive > policy that only allows us to retrieve the object (and nothing else). > Something like: > {code:java} > { > "statement": [ > { > "effect": "Allow", > "action": [ "s3:GetObject" ], > "resource": [ "arn:aws:s3:::minio-bucket/object" ] > } > ] > }{code} > Using the AWS CLI, we can indeed retrieve the object. > When we try with Spark's {{{}DataFrameReader{}}}, we receive an HTTP 403 > response (access denied) from MinIO: > {code:java} > java.nio.file.AccessDeniedException: s3a://minio-bucket/object: getFileStatus > on s3a://minio-bucket/object: > com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied. (Service: > Amazon S3; Status Code: 403; Error Code: AccessDenied; ... 
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:255) > at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:175) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3858) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3688) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$isDirectory$35(S3AFileSystem.java:4724) > at > org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499) > at > org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:444) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2337) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2356) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.isDirectory(S3AFileSystem.java:4722) > at > org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:54) > at > org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370) > at > org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:274) > at > org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:245) > at scala.Option.getOrElse(Option.scala:189) > at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:245) > at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:571) > at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:481) > at > com.soprabanking.dxp.pure.bf.dataaccess.S3Storage.loadDataset(S3Storage.java:55) > at > com.soprabanking.dxp.pure.bf.business.step.DatasetLoader.lambda$doLoad$3(DatasetLoader.java:148) > at > reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:125) > at > reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816) > at > reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151) > at > reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816) > at > reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249) > at > reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816) > at reactor.core.publisher.MonoZip$ZipCoordinator.signal(MonoZip.java:251) > at reactor.core.publisher.MonoZip$ZipInner.onNext(MonoZip.java:336) > at > reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398) > at reactor.core.publisher.MonoZip$ZipInner.onSubscribe(MonoZip.java:325) > at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55) > at reactor.core.p
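For comparison, a plain GetObject like the one the policy permits can be issued directly with the same AWS SDK v1 that appears in the stack trace; the endpoint, region, and credential setup here are illustrative assumptions, not the reporter's actual configuration:

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;

// Sketch only: a bare GetObject succeeds under the s3:GetObject-only policy,
// whereas the S3A getFileStatus probe in the stack trace above is denied.
public class MinioGetObject {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "https://minio.example.com", "us-east-1")) // illustrative endpoint
        .withPathStyleAccessEnabled(true) // MinIO is typically path-style
        .build();
    S3Object object = s3.getObject("minio-bucket", "object");
    System.out.println("Content-Length: "
        + object.getObjectMetadata().getContentLength());
  }
}
```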
[GitHub] [hadoop] ahmarsuhail closed pull request #5154: Hadoop 18073 sdk upgrade delete select mpu
ahmarsuhail closed pull request #5154: Hadoop 18073 sdk upgrade delete select mpu URL: https://github.com/apache/hadoop/pull/5154 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.
hadoop-yetus commented on PR #5152: URL: https://github.com/apache/hadoop/pull/5152#issuecomment-1321831548 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 46s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 11s | | trunk passed | | +1 :green_heart: | compile | 0m 54s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 0m 49s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 1s | | trunk passed | | -1 :x: | javadoc | 0m 55s | [/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/4/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-yarn-api in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. | | +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 18s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 45s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 0m 45s | | the patch passed | | +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 40s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 22s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: The patch generated 0 new + 141 unchanged - 19 fixed = 141 total (was 160) | | +1 :green_heart: | mvnsite | 0m 42s | | the patch passed | | -1 :x: | javadoc | 0m 39s | [/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/4/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-yarn-api in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. 
| | +1 :green_heart: | javadoc | 0m 33s | | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 0 unchanged - 146 fixed = 0 total (was 146) | | +1 :green_heart: | spotbugs | 1m 59s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 16s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 1s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. | | | | 99m 41s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5152/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5152 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 10f81b9059fe 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 0cd27fa13de1fb35122e5c3845cff855b9989b5b | | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/j
[GitHub] [hadoop] ahmarsuhail opened a new pull request, #5154: Hadoop 18073 sdk upgrade delete select mpu
ahmarsuhail opened a new pull request, #5154: URL: https://github.com/apache/hadoop/pull/5154 WIP -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #5146: YARN-11373. [Federation] Support refreshQueues、refreshNodes API's for Federation.
hadoop-yetus commented on PR #5146: URL: https://github.com/apache/hadoop/pull/5146#issuecomment-1321787296

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 36s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | buf | 0m 1s | | buf was not available. |
| +0 :ok: | buf | 0m 1s | | buf was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 14s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 26m 3s | | trunk passed |
| +1 :green_heart: | compile | 9m 50s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | compile | 8m 52s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 2m 10s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 37s | | trunk passed |
| -1 :x: | javadoc | 1m 30s | [/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5146/3/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-yarn-api in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| -1 :x: | javadoc | 1m 28s | [/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5146/3/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-yarn-common in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 3m 33s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 6m 0s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 4s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 57s | | the patch passed |
| +1 :green_heart: | compile | 9m 38s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | cc | 9m 38s | | the patch passed |
| +1 :green_heart: | javac | 9m 38s | | the patch passed |
| +1 :green_heart: | compile | 9m 12s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | cc | 9m 12s | | the patch passed |
| +1 :green_heart: | javac | 9m 12s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 52s | | hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 2 unchanged - 9 fixed = 2 total (was 11) |
| +1 :green_heart: | mvnsite | 3m 14s | | the patch passed |
| -1 :x: | javadoc | 1m 0s | [/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5146/3/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-yarn-api in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| -1 :x: | javadoc | 1m 6s | [/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5146/3/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-yarn-common in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 2m 50s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 5m 53s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 25s | | patch has no errors when building and testing our client artifacts. |
_ Other Tests
[jira] [Commented] (HADOOP-18533) RPC Client performance improvement
[ https://issues.apache.org/jira/browse/HADOOP-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636570#comment-17636570 ] ASF GitHub Bot commented on HADOOP-18533:

huxinqiu commented on PR #5151: URL: https://github.com/apache/hadoop/pull/5151#issuecomment-1321736995

> @huxinqiu Thank you very much for your contribution!
>
> We need to discuss something:
>
> 1. It seems that the benefit is to avoid declaring the `ResponseBuffer` variable, with its 1024-byte initial allocation; the original internal size calculation has been moved outside.
>
> modified code
>
> ```
> int computedSize = connectionContextHeader.getSerializedSize();
> computedSize += CodedOutputStream.computeUInt32SizeNoTag(computedSize);
> int messageSize = message.getSerializedSize();
> computedSize += messageSize;
> computedSize += CodedOutputStream.computeUInt32SizeNoTag(messageSize);
> byte[] dataLengthBuffer = new byte[4];
> dataLengthBuffer[0] = (byte)((computedSize >>> 24) & 0xFF);
> dataLengthBuffer[1] = (byte)((computedSize >>> 16) & 0xFF);
> dataLengthBuffer[2] = (byte)((computedSize >>> 8) & 0xFF);
> dataLengthBuffer[3] = (byte)(computedSize & 0xFF);
> ```
>
> The original calculation code, in connectionContextHeader.writeDelimitedTo(buf), is like this:
>
> ```
> int serialized = this.getSerializedSize();
> int bufferSize = CodedOutputStream.computePreferredBufferSize(CodedOutputStream.computeRawVarint32Size(serialized) + serialized);
> CodedOutputStream codedOutput = CodedOutputStream.newInstance(output, bufferSize);
> codedOutput.writeRawVarint32(serialized);
> this.writeTo(codedOutput);
> codedOutput.flush();
> ```
>
> ResponseBuffer#setSize
>
> ```
> @Override
> public int size() {
>   return count - FRAMING_BYTES;
> }
>
> void setSize(int size) {
>   buf[0] = (byte)((size >>> 24) & 0xFF);
>   buf[1] = (byte)((size >>> 16) & 0xFF);
>   buf[2] = (byte)((size >>> 8) & 0xFF);
>   buf[3] = (byte)((size >>> 0) & 0xFF);
> }
> ```
>
> 2. Code duplication: the following calculation logic appears 3 times
>
> ```
> this.dataLengthBuffer = new byte[4];
> dataLengthBuffer[0] = (byte)((computedSize >>> 24) & 0xFF);
> dataLengthBuffer[1] = (byte)((computedSize >>> 16) & 0xFF);
> dataLengthBuffer[2] = (byte)((computedSize >>> 8) & 0xFF);
> dataLengthBuffer[3] = (byte)(computedSize & 0xFF);
> this.header = header;
> this.rpcRequest = rpcRequest;
> ```
>
> in RpcProtobufRequestWithHeader#Constructor, SaslRpcClient#sendSaslMessage, and Client#writeConnectionContext.

@slfan1989
1. Yes. IpcStreams#out is a BufferedOutputStream, which has a byte array inside it, and protobuf's CodedOutputStream also has an internal byte-array cache to optimize writing, so we don't need to aggregate dataLength, header and rpcRequest into a ResponseBuffer, which is really just another byte array. The only extra cost in the RpcRequestSender thread is the protobuf serialization; a request is usually only a few hundred bytes, so serialization takes only tens of microseconds. It is therefore better to calculate the request size first and then write the dataLength, header and rpcRequest to the BufferedOutputStream one by one, avoiding a 1024-byte array allocation for every request.
2. I'll fix the code duplication afterwards.

> RPC Client performance improvement
> ----------------------------------
>
> Key: HADOOP-18533
> URL: https://issues.apache.org/jira/browse/HADOOP-18533
> Project: Hadoop Common
> Issue Type: Improvement
> Components: rpc-server
> Reporter: xinqiu.hu
> Priority: Minor
> Labels: pull-request-available
>
> The current implementation copies the rpcRequest and header to a ByteArrayOutputStream in order to calculate the total length of the request, then writes that to the socket buffer.
> But if the rpc engine is ProtobufRpcEngine2, we can pre-calculate the request size and send the request directly to the socket buffer, saving one memory copy and avoiding the 1024-byte ResponseBuffer allocation each time a request is sent.
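To make the write path discussed above concrete, here is a minimal sketch, assuming protobuf-java 3.x; `FramedRpcWriter`, `writeRequest`, and the parameter names are illustrative, not the actual Hadoop patch:

```java
import com.google.protobuf.CodedOutputStream;
import com.google.protobuf.Message;
import java.io.IOException;
import java.io.OutputStream;

final class FramedRpcWriter {
  /** Writes [4-byte big-endian frame length][varint + header][varint + payload]. */
  static void writeRequest(OutputStream out, Message header, Message payload)
      throws IOException {
    int headerSize = header.getSerializedSize();
    int payloadSize = payload.getSerializedSize();
    // Frame length covers both length-delimited messages.
    int frameLength = headerSize
        + CodedOutputStream.computeUInt32SizeNoTag(headerSize)
        + payloadSize
        + CodedOutputStream.computeUInt32SizeNoTag(payloadSize);

    // 4-byte big-endian frame length, written without a scratch ResponseBuffer.
    out.write((frameLength >>> 24) & 0xFF);
    out.write((frameLength >>> 16) & 0xFF);
    out.write((frameLength >>> 8) & 0xFF);
    out.write(frameLength & 0xFF);

    // One CodedOutputStream for both delimited messages; it buffers internally,
    // so each protobuf is serialized straight toward `out`.
    CodedOutputStream cos = CodedOutputStream.newInstance(out);
    cos.writeUInt32NoTag(headerSize);
    header.writeTo(cos);
    cos.writeUInt32NoTag(payloadSize);
    payload.writeTo(cos);
    cos.flush();
  }
}
```

The point of the design is that getSerializedSize() lets the frame length be computed up front, so nothing has to be staged in an intermediate buffer just to learn its size.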
[GitHub] [hadoop] hadoop-yetus commented on pull request #5153: YARN-11381. Fix hadoop-yarn-common module Java Doc Errors.
hadoop-yetus commented on PR #5153: URL: https://github.com/apache/hadoop/pull/5153#issuecomment-1321705500

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 50s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 41m 42s | | trunk passed |
| +1 :green_heart: | compile | 1m 7s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 59s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 55s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 5s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 13s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 59s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 2m 12s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 24s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 51s | | the patch passed |
| +1 :green_heart: | compile | 0m 57s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 58s | | the patch passed |
| +1 :green_heart: | compile | 0m 48s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 48s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 32s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5153/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 2 new + 144 unchanged - 21 fixed = 146 total (was 165) |
| +1 :green_heart: | mvnsite | 0m 51s | | the patch passed |
| -1 :x: | javadoc | 0m 50s | [/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5153/1/artifact/out/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 30 new + 69 unchanged - 30 fixed = 99 total (was 99) |
| +1 :green_heart: | javadoc | 0m 47s | | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 0 new + 167 unchanged - 101 fixed = 167 total (was 268) |
| +1 :green_heart: | spotbugs | 1m 59s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 24s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 5m 47s | | hadoop-yarn-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. |
| | | 115m 1s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5153/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5153 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 22023f315657 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 0f6e45b56dcfcc2dd88c562cbb09702fa176
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5152: YARN-11380. Fix hadoop-yarn-api module Java Doc Errors.
slfan1989 commented on code in PR #5152: URL: https://github.com/apache/hadoop/pull/5152#discussion_r1027713838

File: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/package-info.java:

```diff
@@ -1,4 +1,4 @@
-/*
+/**
```

Review Comment: This is a checkstyle problem, not a javadoc problem. Since package-info.java contains little content, I will fix it; I will roll back this part of the change.
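For context on the `-/*` → `+/**` hunk above: the change turns the file's leading block comment into a javadoc comment. A common alternative, sketched below, keeps the Apache license header as a plain block comment and adds a separate package-level javadoc; this is an abbreviated illustration, not the actual Hadoop file:

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. (License header: a plain block comment
 * starting with a single asterisk, ignored by the javadoc tool.)
 */

/**
 * Protocol record classes used by the YARN API. (Package javadoc: starts
 * with a double asterisk, so the javadoc tool and checkstyle pick it up.)
 */
package org.apache.hadoop.yarn.api.protocolrecords;
```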
[GitHub] [hadoop] hadoop-yetus commented on pull request #5104: YARN-11158. Support (Create/Renew/Cancel) DelegationToken API's for Federation.
hadoop-yetus commented on PR #5104: URL: https://github.com/apache/hadoop/pull/5104#issuecomment-1321684954

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 38m 48s | | trunk passed |
| +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 47s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 39s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 44s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 47s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 18s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 58s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 21m 19s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 29s | | the patch passed |
| +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 34s | | the patch passed |
| +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 26s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 19s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 31s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 0s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 46s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 40s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 94m 2s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5104/17/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5104 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 5c14bb162250 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c025e7995d4e88c9c21ed95f823a182ccabf7303 |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5104/17/testReport/ |
| Max. process+thread count | 568 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5104/17/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-18534) Propose a mechanism to free the direct memory occupied by RPC Connections
[ https://issues.apache.org/jira/browse/HADOOP-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17636516#comment-17636516 ] xinqiu.hu commented on HADOOP-18534:

[~slfan1989] If a connection keeps processing requests until it is promoted to the old generation and then expires, a new connection is created the next time a request arrives, while the old connection keeps holding its DirectByteBuffer until a full GC runs. If that cycle keeps repeating and no full GC occurs, many DirectByteBuffers can pile up. But this is a very extreme case and hard to hit. Fortunately, when a request is larger than the last cached direct buffer, sun.nio.ch.Util frees the old cached buffer and allocates a more suitable one, so each thread holds at most one DirectByteBuffer at a time. Most requests are under 1000 bytes and there won't be many connections at once, so this is a small problem.

> Propose a mechanism to free the direct memory occupied by RPC Connections
> --------------------------------------------------------------------------
>
> Key: HADOOP-18534
> URL: https://issues.apache.org/jira/browse/HADOOP-18534
> Project: Hadoop Common
> Issue Type: Improvement
> Components: rpc-server
> Reporter: xinqiu.hu
> Priority: Minor
>
> In the RPC Client, a thread called RpcRequestSender is responsible for writing connection requests to the socket. Every time a request is sent, a direct buffer is allocated in sun.nio.ch.IOUtil#write() and cached.
> If the Connection and RpcRequestSender are promoted to the old generation, they are not reclaimed until a full GC runs, so the DirectByteBuffer cached in sun.nio.ch.Util is not recycled either. If the memory occupied by DirectByteBuffers grows too large, the JVM process may be killed before it gets a chance to run a full GC.
> Unfortunately, there is no easy way to free these DirectByteBuffers. Perhaps we can free them manually when the Connection is closed:
> {code}
> private void freeDirectBuffer() {
>   try {
>     // Asking for a 1-byte buffer returns this thread's cached direct
>     // buffer (any cached buffer is at least that large), which we then clean.
>     DirectBuffer buffer = (DirectBuffer) Util.getTemporaryDirectBuffer(1);
>     buffer.cleaner().clean();
>   } catch (Throwable t) {
>     LOG.error("free direct memory error, connectionId: " + remoteId, t);
>   }
> }{code}
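As a concrete illustration of the caching behavior discussed above, the sketch below shows how a long-lived writer thread ends up pinning direct memory; it is not Hadoop code, and the host and port are placeholders:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class DirectBufferCacheDemo {
  public static void main(String[] args) throws Exception {
    // Placeholder endpoint; any reachable TCP service works for the demo.
    SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", 8020));

    // A heap buffer: the channel cannot hand this to the OS directly, so
    // sun.nio.ch.IOUtil#write copies it into a temporary DirectByteBuffer
    // taken from the calling thread's buffer cache (sun.nio.ch.Util).
    ByteBuffer heap = ByteBuffer.wrap(new byte[512]);
    ch.write(heap); // after this call the thread caches a >=512-byte direct buffer

    // The cached direct buffer is NOT freed here; it lives until the thread
    // dies or a GC runs its Cleaner -- the retention problem this issue describes.
    ch.close();
  }
}
```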
[GitHub] [hadoop] hadoop-yetus commented on pull request #4717: YARN-6946. Upgrade JUnit from 4 to 5 in hadoop-yarn-common
hadoop-yetus commented on PR #4717: URL: https://github.com/apache/hadoop/pull/4717#issuecomment-1321649048

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 53s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 104 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 12s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 29m 1s | | trunk passed |
| +1 :green_heart: | compile | 10m 41s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 9m 6s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 2m 5s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 1s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 4s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 55s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 23s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 23s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 24m 48s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 8s | | the patch passed |
| +1 :green_heart: | compile | 9m 58s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 9m 58s | | hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 0 new + 723 unchanged - 9 fixed = 723 total (was 732) |
| +1 :green_heart: | compile | 9m 9s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 9m 9s | | hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 0 new + 639 unchanged - 4 fixed = 639 total (was 643) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 54s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4717/10/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn-project/hadoop-yarn: The patch generated 12 new + 196 unchanged - 171 fixed = 208 total (was 367) |
| +1 :green_heart: | mvnsite | 1m 49s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 46s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 41s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 22s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 33s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 5m 43s | | hadoop-yarn-common in the patch passed. |
| +1 :green_heart: | unit | 0m 49s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 1m 2s | | The patch does not generate ASF License warnings. |
| | | 168m 10s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4717/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4717 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
| uname | Linux 2323e162b162 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Pers