[jira] [Comment Edited] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC
[ https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523243#comment-16523243 ] Erik Krogen edited comment on HDFS-13609 at 6/26/18 5:54 AM: - Thanks [~shv] and [~linyiqun]! [~csun]: * You are right, according to [Oracle's conventions|http://www.oracle.com/technetwork/articles/java/index-137868.html]: {quote} Insert a blank comment line between the description and the list of tags, as shown. {quote} I was not aware of this, thanks for educating me. * Thanks for the catch. I have attached the v004 patch to document the final changes. Given how minor the v003 -> v004 patch change is (Chao's two whitespace comments), I just committed this based on the +1s on v003. Note that the bad Jenkins run above is just because I committed the patch but didn't yet mark the issue as resolved. Fixed now. was (Author: xkrogen): Thanks [~shv] and [~linyiqun]! [~csun]: * You are right, according to [Oracle's conventions|http://www.oracle.com/technetwork/articles/java/index-137868.html]: {quote} Insert a blank comment line between the description and the list of tags, as shown. {quote} I was not aware of this, thanks for educating me. * Thanks for the catch. I have attached the v004 patch to document the final changes. Given how minor the v003 -> v004 patch change is (Chao's two whitespace comments), I just committed this based on the +1s on v003. > [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via > RPC > - > > Key: HDFS-13609 > URL: https://issues.apache.org/jira/browse/HDFS-13609 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, namenode >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Major > Fix For: HDFS-12943 > > Attachments: HDFS-13609-HDFS-12943.000.patch, > HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, > HDFS-13609-HDFS-12943.003.patch, HDFS-13609-HDFS-12943.004.patch > > > See HDFS-13150 for the full design. 
> This JIRA is targeted at the NameNode-side changes to enable tailing > in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are > in the QuorumJournalManager. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
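The Oracle Javadoc convention cited in the comment above (a blank comment line between the description and the block tags) can be sketched as follows; the class and method names are purely illustrative, not code from the patch:

```java
/**
 * Demonstrates the Javadoc layout convention discussed above: the
 * description paragraph is separated from the tag list by one blank
 * comment line. Names here are illustrative only.
 */
public class JavadocConventionExample {

  /**
   * Returns the sum of two integers.
   *
   * @param a the first operand
   * @param b the second operand
   * @return the sum of {@code a} and {@code b}
   */
  static int add(int a, int b) {
    return a + b;
  }

  public static void main(String[] args) {
    System.out.println(add(2, 3)); // prints 5
  }
}
```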
[jira] [Updated] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC
[ https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-13609: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-12943 Status: Resolved (was: Patch Available) > [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via > RPC > - > > Key: HDFS-13609 > URL: https://issues.apache.org/jira/browse/HDFS-13609 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, namenode >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Major > Fix For: HDFS-12943 > > Attachments: HDFS-13609-HDFS-12943.000.patch, > HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, > HDFS-13609-HDFS-12943.003.patch, HDFS-13609-HDFS-12943.004.patch > > > See HDFS-13150 for the full design. > This JIRA is targeted at the NameNode-side changes to enable tailing > in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are > in the QuorumJournalManager.
[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC
[ https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523243#comment-16523243 ] Erik Krogen commented on HDFS-13609: Thanks [~shv] and [~linyiqun]! [~csun]: * You are right, according to [Oracle's conventions|http://www.oracle.com/technetwork/articles/java/index-137868.html]: {quote} Insert a blank comment line between the description and the list of tags, as shown. {quote} I was not aware of this, thanks for educating me. * Thanks for the catch. I have attached the v004 patch to document the final changes. Given how minor the v003 -> v004 patch change is (Chao's two whitespace comments), I just committed this based on the +1s on v003. > [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via > RPC > - > > Key: HDFS-13609 > URL: https://issues.apache.org/jira/browse/HDFS-13609 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, namenode >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Major > Attachments: HDFS-13609-HDFS-12943.000.patch, > HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, > HDFS-13609-HDFS-12943.003.patch, HDFS-13609-HDFS-12943.004.patch > > > See HDFS-13150 for the full design. > This JIRA is targeted at the NameNode-side changes to enable tailing > in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are > in the QuorumJournalManager.
[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC
[ https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523236#comment-16523236 ] genericqa commented on HDFS-13609: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HDFS-13609 does not apply to HDFS-12943. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-13609 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929150/HDFS-13609-HDFS-12943.004.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24492/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via > RPC > - > > Key: HDFS-13609 > URL: https://issues.apache.org/jira/browse/HDFS-13609 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, namenode >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Major > Attachments: HDFS-13609-HDFS-12943.000.patch, > HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, > HDFS-13609-HDFS-12943.003.patch, HDFS-13609-HDFS-12943.004.patch > > > See HDFS-13150 for the full design. > This JIRA is targetted at the NameNode-side changes to enable tailing > in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are > in the QuorumJournalManager. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC
[ https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-13609: --- Attachment: HDFS-13609-HDFS-12943.004.patch > [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via > RPC > - > > Key: HDFS-13609 > URL: https://issues.apache.org/jira/browse/HDFS-13609 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, namenode >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Major > Attachments: HDFS-13609-HDFS-12943.000.patch, > HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch, > HDFS-13609-HDFS-12943.003.patch, HDFS-13609-HDFS-12943.004.patch > > > See HDFS-13150 for the full design. > This JIRA is targeted at the NameNode-side changes to enable tailing > in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are > in the QuorumJournalManager.
[jira] [Commented] (HDDS-192) Create new SCMCommand to request a replication of a container
[ https://issues.apache.org/jira/browse/HDDS-192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523173#comment-16523173 ] Hudson commented on HDDS-192: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14481 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14481/]) HDDS-192:Create new SCMCommand to request a replication of a container. (bharat: rev 238fe00ad2692154f6a382f35735169ee5e4af2c) * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java * (add) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java * (add) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java * (edit) hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto * (add) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestReplicateContainerHandler.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/HeartbeatEndpointTask.java > Create new SCMCommand to request a replication of a container > - > > Key: HDDS-192 > URL: https://issues.apache.org/jira/browse/HDDS-192 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-192.001.patch, HDDS-192.002.patch > > > ReplicationManager needs to request replication/copy of container or deletion > of container. We have DeleteContainerCommand (a command which is part of the > datanode heartbeat response) but no command to request a copy of a container. 
> This patch adds the command with all the required protobuf > serialization/deserialization boilerplate to make it easier to review further > patches. > There is no business logic in this patch, but there is a unit test which checks that the > message arrives at the datanode side from the SCM side.
[jira] [Updated] (HDDS-192) Create new SCMCommand to request a replication of a container
[ https://issues.apache.org/jira/browse/HDDS-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-192: Resolution: Fixed Status: Resolved (was: Patch Available) Thank you, [~elek], for the contribution. I have committed this to trunk. > Create new SCMCommand to request a replication of a container > - > > Key: HDDS-192 > URL: https://issues.apache.org/jira/browse/HDDS-192 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-192.001.patch, HDDS-192.002.patch > > > ReplicationManager needs to request replication/copy of container or deletion > of container. We have DeleteContainerCommand (a command which is part of the > datanode heartbeat response) but no command to request a copy of a container. > This patch adds the command with all the required protobuf > serialization/deserialization boilerplate to make it easier to review further > patches. > There is no business logic in this patch, but there is a unit test which checks that the > message arrives at the datanode side from the SCM side.
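The heartbeat-command pattern that HDDS-192 follows (a command object carried in the SCM heartbeat response, round-tripped through a wire format) can be sketched minimally as below. The class name, field, and string-based wire format are assumptions for illustration; the committed patch defines the real ReplicateContainerCommand backed by generated protobuf messages.

```java
// Minimal sketch of a datanode command delivered via the SCM heartbeat
// response, modeled loosely on the DeleteContainerCommand style mentioned
// in the issue description. All names and the string wire format are
// illustrative; the real patch uses protobuf serialization.
public class ReplicateContainerCommandSketch {
  private final long containerID;

  public ReplicateContainerCommandSketch(long containerID) {
    this.containerID = containerID;
  }

  public long getContainerID() {
    return containerID;
  }

  // Stand-in for the protobuf getProtoBufMessage() conversion.
  public String toWire() {
    return "replicateContainer:" + containerID;
  }

  // Stand-in for the protobuf getFromProtobuf() conversion.
  public static ReplicateContainerCommandSketch fromWire(String wire) {
    long id = Long.parseLong(wire.substring(wire.indexOf(':') + 1));
    return new ReplicateContainerCommandSketch(id);
  }

  public static void main(String[] args) {
    // Round-trip the command the way the unit test in the patch checks
    // delivery from the SCM side to the datanode side.
    ReplicateContainerCommandSketch cmd = new ReplicateContainerCommandSketch(42L);
    System.out.println(fromWire(cmd.toWire()).getContainerID()); // prints 42
  }
}
```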
[jira] [Commented] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable
[ https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523138#comment-16523138 ] Xiao Chen commented on HDFS-13690: -- Thanks for working on this one Kitti, and Gabor for the review! Some comments: * It looks like this Jira will handle 2 types of commands: the key shell and the crypto admin. Let's please update the Jira title. * From a usability standpoint, I think printing out the stack trace by default is overwhelming. Take the HDFS CLI for example: when the NN is down, {{hadoop fs}} will just print 1 line of information including host:port. keyshell/cryptoadmin seems to give too much information. We can LOG.debug the full stack trace in case people want to debug it (also following {{hadoop fs}} when {{HADOOP_ROOT_LOGGER=DEBUG,console}}), but print just a 1-liner by default. See {{Command#displayError}} for an example. * Why not handle {{(ex instanceof SocketTimeoutException || ex instanceof ConnectException)}} together? > Improve error message when creating encryption zone while KMS is unreachable > > > Key: HDFS-13690 > URL: https://issues.apache.org/jira/browse/HDFS-13690 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, hdfs, kms >Reporter: Kitti Nanasi >Assignee: Kitti Nanasi >Priority: Minor > Attachments: HDFS-13690.001.patch, HDFS-13690.002.patch > > > In failure testing, we stopped the KMS and then tried to run some encryption > related commands. > {{hdfs crypto -createZone}} will complain with a short "RemoteException: > Connection refused." This message could be improved to explain that we cannot > connect to the KMSClientProvider. 
> For example, {{hadoop key list}} while KMS is down will error: > {code} > -bash-4.1$ hadoop key list > Cannot list keys for KeyProvider: > KMSClientProvider[http://hdfs-cdh5-vanilla-1.vpc.cloudera.com:16000/kms/v1/]: > Connection refusedjava.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) > at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at sun.net.NetworkClient.doConnect(NetworkClient.java:175) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) > at sun.net.www.http.HttpClient.(HttpClient.java:211) > at sun.net.www.http.HttpClient.New(HttpClient.java:308) > at sun.net.www.http.HttpClient.New(HttpClient.java:326) > at > sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996) > at > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932) > at > sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850) > at > org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125) > at > org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397) > at > 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479) > at > org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286) > at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
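The review suggestions above (one concise error line by default, both connection-type exceptions handled in one branch, full stack trace reserved for debug logging) can be sketched as follows. The method name and message format are illustrative, not the actual Hadoop Command#displayError API:

```java
import java.net.ConnectException;
import java.net.SocketTimeoutException;

// Sketch of the error handling proposed in the review. displayError and the
// message wording are hypothetical; a real implementation would also
// LOG.debug the full stack trace for HADOOP_ROOT_LOGGER=DEBUG,console.
public class KeyShellErrorExample {

  static String displayError(Exception ex, String provider) {
    // Handle both connection-type failures in a single branch, as the
    // review asks, and print only a one-liner by default.
    if (ex instanceof ConnectException || ex instanceof SocketTimeoutException) {
      return "Cannot connect to KeyProvider " + provider + ": " + ex.getMessage();
    }
    return "Error from KeyProvider " + provider + ": " + ex.getMessage();
  }

  public static void main(String[] args) {
    Exception ex = new ConnectException("Connection refused");
    // One concise line instead of the 30-frame stack trace quoted above.
    System.out.println(
        displayError(ex, "KMSClientProvider[http://kms-host:16000/kms/v1/]"));
  }
}
```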
[jira] [Comment Edited] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523057#comment-16523057 ] Bharat Viswanadham edited comment on HDDS-173 at 6/26/18 2:11 AM: -- One more comment which I missed, in KeyValueHandler.java: Line 258: kvContainer.writeUnlock(); it should be writeLock(), and we should also acquire the lock before checking openContainer, since the finally block releases the write lock. Line 371: It should be as below; we should call getDeleteKey: BlockID blockID = BlockID.getFromProtobuf( request.getDeleteKey().getBlockID()); was (Author: bharatviswa): One more comment which I missed: Line 258: kvContainer.writeUnlock(); it should be writeLock(), and we should also acquire the lock before checking openContainer, since the finally block releases the write lock. Line 371: It should be as below; we should call getDeleteKey: BlockID blockID = BlockID.getFromProtobuf( request.getDeleteKey().getBlockID()); > Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. Each ContainerType will have its own > Handler. The Handler class will process the message.
[jira] [Comment Edited] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523057#comment-16523057 ] Bharat Viswanadham edited comment on HDDS-173 at 6/26/18 2:10 AM: -- One more comment which I missed: Line 258: kvContainer.writeUnlock(); it should be writeLock(), and we should also acquire the lock before checking openContainer, since the finally block releases the write lock. Line 371: It should be as below; we should call getDeleteKey: BlockID blockID = BlockID.getFromProtobuf( request.getDeleteKey().getBlockID()); was (Author: bharatviswa): One more comment which I missed: Line 258: kvContainer.writeUnlock(); it should be writeLock(), and we should also acquire the lock before checking openContainer, since the finally block releases the write lock. > Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. Each ContainerType will have its own > Handler. The Handler class will process the message.
[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523057#comment-16523057 ] Bharat Viswanadham commented on HDDS-173: - One more comment which I missed: Line 258: kvContainer.writeUnlock(); it should be writeLock(), and we should also acquire the lock before checking openContainer, since the finally block releases the write lock. > Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. Each ContainerType will have its own > Handler. The Handler class will process the message.
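The locking fix requested in the review above (acquire the write lock, not writeUnlock(), before checking container state, and release it in finally) can be sketched as follows. The class, field, and method names are illustrative stand-ins, not the actual KeyValueHandler code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the pattern the review asks for: take the write lock before
// checking the container's open state, so the check and the state change
// are atomic, and always release in a finally block. Names are
// illustrative, not the real ozone container classes.
public class ContainerCloseExample {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private boolean open = true;

  boolean isOpen() {
    return open;
  }

  void close() {
    // The bug flagged at line 258 was calling writeUnlock() here instead
    // of acquiring the lock.
    lock.writeLock().lock();
    try {
      if (!open) {
        return; // already closed; nothing to do
      }
      open = false;
    } finally {
      lock.writeLock().unlock(); // matches the acquire above
    }
  }

  public static void main(String[] args) {
    ContainerCloseExample container = new ContainerCloseExample();
    container.close();
    System.out.println(container.isOpen()); // prints false
  }
}
```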
[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523003#comment-16523003 ] genericqa commented on HDDS-173: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 9 new or modified test files. {color} | || || || || {color:brown} HDDS-48 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 3s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 50s{color} | {color:green} HDDS-48 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 22m 17s{color} | {color:red} root in HDDS-48 failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} HDDS-48 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 7s{color} | {color:green} HDDS-48 passed {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 6m 19s{color} | {color:red} branch has errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s{color} | {color:green} HDDS-48 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s{color} | {color:green} HDDS-48 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 21m 39s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 21m 39s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 21m 39s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 2m 38s{color} | {color:red} patch has errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 11s{color} | {color:red} hadoop-hdds/common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s{color} | {color:green} common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 10s{color} | {color:red} container-service in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s{color} | {color:green} tools in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 50s{color} | {color:red} integration-test in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {col
[jira] [Updated] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDDS-173: Attachment: (was: HDFS-FederationBlogPost.pdf) > Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. Each ContainerType will have its own > Handler. The Handler class will process the message.
[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522971#comment-16522971 ] Bharat Viswanadham commented on HDDS-173: - [~hanishakoneru] Thanks for the updated patch. A few comments: # In deleteContainer, we can hold the lock until we remove the container from the container map. # Checkstyle issues in TestKeyValueHandler.java: some unused imports, and indentation issues at Line 98 and 218. # TestKeyValueHandler: we already have private VolumeSet volumeSet; private KeyValueHandler handler; so there is no need to define them again in testHandlerCommandHandling. # KeyValueHandler.java: unused import at Line 21, and indentation issues at Line 145, 541 and 561. > Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch, > HDFS-FederationBlogPost.pdf > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. Each ContainerType will have its own > Handler. The Handler class will process the message.
[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522970#comment-16522970 ] genericqa commented on HDFS-13695: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 13 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 42m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 33m 14s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 33m 14s{color} | {color:red} root generated 1 new + 1561 unchanged - 2 fixed = 1562 total (was 1563) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 15s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}170m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13695 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929087/HDFS-13695.v3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3019bbc2a101 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c687a66 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/24490/artifact/out/diff-compile-javac-root.txt | | unit | http
[jira] [Updated] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDDS-173: Attachment: HDFS-FederationBlogPost.pdf > Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch, > HDFS-FederationBlogPost.pdf > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. Each ContainerType will have its own > Handler. The Handler class will process the message. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP
[ https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522956#comment-16522956 ] Chen Liang commented on HDFS-13699: --- Posted a WIP patch; tests still need to be added. > Add DFSClient sending handshake token to DataNode, and allow DataNode > overwrite downstream QOP > -- > > Key: HDFS-13699 > URL: https://issues.apache.org/jira/browse/HDFS-13699 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13699.WIP.001.patch > > > Given the other Jiras under HDFS-13541, this Jira is to allow DFSClient to > forward the encrypted secret to DataNode. The encrypted message is the QOP > that client and NameNode have used. DataNode decrypts the message and enforces > the QOP for the client connection. This Jira will also include > overwriting the downstream QOP, as mentioned in the HDFS-13541 design doc. > Namely, this is to allow an inter-DN QOP that is different from the client-DN QOP.
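The QOP flow described above can be condensed into a toy decision rule. This is a hedged sketch only: the class and method names (QopNegotiator, enforceQop) are invented for illustration and are not the actual HDFS-13699 patch code; the QOP strings are the standard SASL values.

```java
// Hypothetical sketch of the QOP enforcement rule described in HDFS-13699.
// QopNegotiator/enforceQop are illustrative names, not the patch's API.
public class QopNegotiator {
    /**
     * Pick the QOP to enforce on a connection. The client forwards the
     * NameNode-encrypted QOP (the "handshake token"); after decrypting it,
     * the DataNode enforces that QOP -- unless this is a downstream
     * (inter-DN) connection and the DataNode is configured to overwrite it.
     */
    public static String enforceQop(String clientNegotiatedQop,
                                    String configuredDownstreamQop,
                                    boolean isDownstream) {
        if (isDownstream && configuredDownstreamQop != null) {
            // Inter-DN QOP may differ from the client-DN QOP.
            return configuredDownstreamQop;
        }
        return clientNegotiatedQop;
    }
}
```

With this rule, a client-DN connection negotiated at "auth-conf" stays at "auth-conf", while a downstream DN-DN hop with a configured override of "auth" would use "auth" instead.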
[jira] [Updated] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP
[ https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-13699: -- Attachment: HDFS-13699.WIP.001.patch > Add DFSClient sending handshake token to DataNode, and allow DataNode > overwrite downstream QOP > -- > > Key: HDFS-13699 > URL: https://issues.apache.org/jira/browse/HDFS-13699 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13699.WIP.001.patch > > > Given the other Jiras under HDFS-13541, this Jira is to allow DFSClient to > forward the encrypted secret to DataNode. The encrypted message is the QOP > that client and NameNode have used. DataNode decrypts the message and enforces > the QOP for the client connection. This Jira will also include > overwriting the downstream QOP, as mentioned in the HDFS-13541 design doc. > Namely, this is to allow an inter-DN QOP that is different from the client-DN QOP.
[jira] [Commented] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522948#comment-16522948 ] genericqa commented on HDFS-13665: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 7s{color} | {color:green} HDFS-12943 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 22m 2s{color} | {color:red} root in HDFS-12943 failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} HDFS-12943 passed {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 6s{color} | {color:red} branch has errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 21s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 20m 46s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 20m 46s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 1m 34s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 32s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 92m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13665 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929092/HDFS-13665-HDFS-12943.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ec4de96f95e1 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-12943 / 292ccdc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/24491/artifact/out/branch-compile-root.txt | | findbugs | v3.1.0-RC1 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/24491/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/24491/artifact/out/patch-compile-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24491/testReport/ | | Max. process+thread count
[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522894#comment-16522894 ] Hanisha Koneru commented on HDDS-173: - Thanks for the review, [~bharatviswa]. Addressed the comments in patch v02. {quote}1. This is added (optional ContainerType containerType = 20 )to ContainerCommandRequestProto, and it is only set during createContainer, This is used in HDDSDispatcher to send the request to specific handler. So, can we make default as KeyValueContainer, so that it is not needed to set for other requests? {quote} I have removed ContainerType from ContainerCommandRequestProto. This will only be specified in CreateContainerRequestProto now. I made changes to HddsDispatcher to get the containerType from the container object. {quote}7. Line 398: deleteChunk does not throw IOException, do we need this? Same comment applicable for other chunk related operations. {quote} {{ChunkInfo.getFromProtoBuf}} throws an IOException. So need to handle that exception here. {quote}9. In handleCreateContainer, do we need to check whether this container already exists before createContainer, by calling containerset.getContainer?{quote} {{containerSet.addContainer()}} checks that the containerID is not present already in the set. Updated to throw and propagate the error. > Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. Each ContainerType will have its own > Handler. The Handler class will process the message. 
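The Dispatcher/Handler split discussed in the comment above (one Handler per ContainerType, with the dispatcher routing each ContainerCommandRequest to it) can be sketched as follows. This is an illustrative stand-in only: DispatcherSketch, its Handler interface, and the string request/response types are simplifications invented here, not the HddsDispatcher code in the HDDS-173 patch.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of per-ContainerType dispatch, as described in HDDS-173.
public class DispatcherSketch {
    public enum ContainerType { KEY_VALUE }

    /** Each ContainerType registers its own handler. */
    public interface Handler {
        String handle(String request);
    }

    private final Map<ContainerType, Handler> handlers =
        new EnumMap<>(ContainerType.class);

    public void register(ContainerType type, Handler handler) {
        handlers.put(type, handler);
    }

    /** Route a request to the handler registered for its container type. */
    public String dispatch(ContainerType type, String request) {
        Handler handler = handlers.get(type);
        if (handler == null) {
            throw new IllegalArgumentException("No handler for " + type);
        }
        return handler.handle(request);
    }
}
```

This also illustrates why the type only needs to travel with the create-container request: for every later request, the dispatcher can recover the type from the container object it already holds.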
[jira] [Updated] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDDS-173: Attachment: HDDS-173-HDDS-48.002.patch > Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch, HDDS-173-HDDS-48.002.patch > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. Each ContainerType will have its own > Handler. The Handler class will process the message.
[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522892#comment-16522892 ] genericqa commented on HDDS-94: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/acceptance-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 24s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 34s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/acceptance-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s{color} | {color:green} container-service in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s{color} | {color:green} hadoop-dist in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s{color} | {color:green} acceptance-test in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} | |
[jira] [Created] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP
Chen Liang created HDFS-13699: - Summary: Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP Key: HDFS-13699 URL: https://issues.apache.org/jira/browse/HDFS-13699 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Chen Liang Assignee: Chen Liang Given the other Jiras under HDFS-13541, this Jira is to allow DFSClient to forward the encrypted secret to DataNode. The encrypted message is the QOP that client and NameNode have used. DataNode decrypts the message and enforces the QOP for the client connection. This Jira will also include overwriting the downstream QOP, as mentioned in the HDFS-13541 design doc. Namely, this is to allow an inter-DN QOP that is different from the client-DN QOP.
[jira] [Work started] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP
[ https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-13699 started by Chen Liang. - > Add DFSClient sending handshake token to DataNode, and allow DataNode > overwrite downstream QOP > -- > > Key: HDFS-13699 > URL: https://issues.apache.org/jira/browse/HDFS-13699 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > > Given the other Jiras under HDFS-13541, this Jira is to allow DFSClient to > forward the encrypted secret to DataNode. The encrypted message is the QOP > that client and NameNode have used. DataNode decrypts the message and enforces > the QOP for the client connection. This Jira will also include > overwriting the downstream QOP, as mentioned in the HDFS-13541 design doc. > Namely, this is to allow an inter-DN QOP that is different from the client-DN QOP.
[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522872#comment-16522872 ] Ian Pickering commented on HDFS-13695: -- Thanks [~elgoiri] for the comment, that does make sense. I can open another patch for commons for GenericTestUtils as an up-to-date version of HADOOP-14624. > Move logging to slf4j in HDFS package > - > > Key: HDFS-13695 > URL: https://issues.apache.org/jira/browse/HDFS-13695 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Ian Pickering >Priority: Major > Attachments: HDFS-13695.v1.patch, HDFS-13695.v2.patch, > HDFS-13695.v3.patch > > > Move logging to slf4j in HDFS package
[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522865#comment-16522865 ] Íñigo Goiri commented on HDFS-13695: Not sure about {{GenericTestUtils}}. We should do that on the commons side and probably not use the full package. > Move logging to slf4j in HDFS package > - > > Key: HDFS-13695 > URL: https://issues.apache.org/jira/browse/HDFS-13695 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Ian Pickering >Priority: Major > Attachments: HDFS-13695.v1.patch, HDFS-13695.v2.patch, > HDFS-13695.v3.patch > > > Move logging to slf4j in HDFS package
[jira] [Updated] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HDFS-13665: Affects Version/s: HDFS-12943 > Move RPC response serialization into Server.doResponse > -- > > Key: HDFS-13665 > URL: https://issues.apache.org/jira/browse/HDFS-13665 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13665-HDFS-12943.000.patch > > > In HDFS-13399 we addressed a race condition in AlignmentContext processing > where the RPC response would assign a transactionId independently of the > transaction's own processing, resulting in a stateId response that was lower > than expected. However, this caused us to serialize the RpcResponse twice in > order to address the header field change. > See here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 > And here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 > In the end, it was agreed to move the logic of Server.setupResponse into > Server.doResponse directly.
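The motivation in the HDFS-13665 description (serialize the RPC response only once, after the call's stateId is known) can be shown with a toy example. Everything here is an assumption for illustration: ResponseSketch and its string header format are invented stand-ins, not the Hadoop ipc.Server implementation.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the "serialize once, inside doResponse" idea.
public class ResponseSketch {
    /**
     * Build the header only when the transaction's stateId is final, then
     * serialize exactly once. (The pre-patch flow serialized early and had
     * to re-serialize when the header's stateId later changed.)
     */
    public static byte[] doResponse(String payload, long lastSeenStateId) {
        String header = "stateId=" + lastSeenStateId + ";";
        return (header + payload).getBytes(StandardCharsets.UTF_8);
    }
}
```

The design point is ordering, not the encoding: deferring serialization until the state-id header field is final removes both the race and the double-serialization cost.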
[jira] [Updated] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HDFS-13665: Attachment: HDFS-13665-HDFS-12943.000.patch > Move RPC response serialization into Server.doResponse > -- > > Key: HDFS-13665 > URL: https://issues.apache.org/jira/browse/HDFS-13665 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13665-HDFS-12943.000.patch > > > In HDFS-13399 we addressed a race condition in AlignmentContext processing > where the RPC response would assign a transactionId independently of the > transaction's own processing, resulting in a stateId response that was lower > than expected. However, this caused us to serialize the RpcResponse twice in > order to address the header field change. > See here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 > And here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 > In the end, it was agreed to move the logic of Server.setupResponse into > Server.doResponse directly.
[jira] [Updated] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HDFS-13665: Status: Patch Available (was: In Progress) > Move RPC response serialization into Server.doResponse > -- > > Key: HDFS-13665 > URL: https://issues.apache.org/jira/browse/HDFS-13665 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13665-HDFS-12943.000.patch > > > In HDFS-13399 we addressed a race condition in AlignmentContext processing > where the RPC response would assign a transactionId independently of the > transaction's own processing, resulting in a stateId response that was lower > than expected. However, this caused us to serialize the RpcResponse twice in > order to address the header field change. > See here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 > And here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 > In the end, it was agreed to move the logic of Server.setupResponse into > Server.doResponse directly.
[jira] [Work started] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-13665 started by Plamen Jeliazkov. --- > Move RPC response serialization into Server.doResponse > -- > > Key: HDFS-13665 > URL: https://issues.apache.org/jira/browse/HDFS-13665 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13665-HDFS-12943.000.patch > > > In HDFS-13399 we addressed a race condition in AlignmentContext processing > where the RPC response would assign a transactionId independently of the > transaction's own processing, resulting in a stateId response that was lower > than expected. However, this caused us to serialize the RpcResponse twice in > order to address the header field change. > See here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 > And here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 > In the end, it was agreed to move the logic of Server.setupResponse into > Server.doResponse directly.
[jira] [Commented] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522840#comment-16522840 ] genericqa commented on HDDS-193: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 40s{color} | {color:red} hadoop-hdds/server-scm generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s{color} | {color:green} server-scm in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 6s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdds/server-scm | | | Dead store to datanodeDetails in org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.dispatch(StorageContainerDatanodeProtocolProtos$SCMHeartbeatRequestProto) At SCMDatanodeHeartbeatDispatcher.java:org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.dispatch(StorageContainerDatanodeProtocolProtos$SCMHeartbeatRequestProto) At SCMDatanodeHeartbeatDispatcher.java:[line 62] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDDS-193 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929083/HDDS-193.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a99856924bc2 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a55d6bb | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDDS-Build/358/arti
[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ian Pickering updated HDFS-13695: - Attachment: HDFS-13695.v3.patch > Move logging to slf4j in HDFS package > - > > Key: HDFS-13695 > URL: https://issues.apache.org/jira/browse/HDFS-13695 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Ian Pickering >Priority: Major > Attachments: HDFS-13695.v1.patch, HDFS-13695.v2.patch, > HDFS-13695.v3.patch > > > Move logging to slf4j in HDFS package -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522813#comment-16522813 ] genericqa commented on HDFS-13695: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HDFS-13695 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-13695 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929084/HDFS-13695.v2.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24489/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Move logging to slf4j in HDFS package > - > > Key: HDFS-13695 > URL: https://issues.apache.org/jira/browse/HDFS-13695 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Ian Pickering >Priority: Major > Attachments: HDFS-13695.v1.patch, HDFS-13695.v2.patch > > > Move logging to slf4j in HDFS package
[jira] [Created] (HDDS-194) Remove NodePoolManager and node pool handling from SCM
Elek, Marton created HDDS-194: - Summary: Remove NodePoolManager and node pool handling from SCM Key: HDDS-194 URL: https://issues.apache.org/jira/browse/HDDS-194 Project: Hadoop Distributed Data Store Issue Type: Improvement Components: SCM Reporter: Elek, Marton Assignee: Elek, Marton Fix For: 0.2.1 The current code uses NodePoolManager and ContainerSupervisor to group the nodes into smaller groups (pools) and handle the pull-based node reports group by group. But this code is no longer used, as we have switched back to a push-based model. On the datanode side the reports can be handled by the specific report handlers, and on the SCM side the reports will be processed by the SCMHeartbeatDispatcher, which will send the events to the EventQueue. The NodePool abstraction can therefore be removed from the code.
[jira] [Commented] (HDDS-169) Add Volume IO Stats
[ https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522806#comment-16522806 ] Xiaoyu Yao commented on HDDS-169: - Thanks [~bharatviswa] for the update. +1 for v2 patch. > Add Volume IO Stats > > > Key: HDDS-169 > URL: https://issues.apache.org/jira/browse/HDDS-169 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-169-HDDS-48.00.patch, HDDS-169-HDDS-48.01.patch, > HDDS-169-HDDS-48.02.patch > > > This Jira is used to add VolumeIO stats in the datanode. > Add IO calculations for Chunk operations. > readBytes, readOpCount, writeBytes, writeOpCount, readTime, writeTime. >
[jira] [Commented] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
[ https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522799#comment-16522799 ] Xiao Chen commented on HDFS-13697: -- Hi [~zvenczel], thanks for reporting the issue and providing a fix. Great work on identifying this! RCA and fix make sense to me; some comments, mainly on the test: - It looks like we can just set up the test by setting {{hadoop.kms.blacklist.DECRYPT_EEK}} to oozie on the {{kmsConf}} object, eliminating the need for a customized xml. - I agree having the doAs in {{HdfsKMSUtil#decryptEncryptedDataEncryptionKey}} is cleaner than doing this in the callers. Let's have the test cover both the input and output stream though. This can be done by doing open on the file once it's created. - Let's use the least number of parameters for object construction. I guess the {{DFSClient}} ctor is to bypass the client cache? Please comment about it if so. We can cast the stream returned by {{DFSClient#create}} to a {{DFSOutputStream}}, then pass it in to {{createWrappedOutputStream}}. - No need to really write to the stream to trigger decrypt. - The comment {{set up KMS not to allow oozie service to decrypt encryption keys}} is technically not accurate. 'encryption keys' is a vague term; we can call it edeks (hdfs term) or eek (kms term). Suggest using {{set up KMS but blacklist oozie service to decrypt EDEKs}} > EDEK decrypt fails due to proxy user being lost because of empty > AccessControllerContext > > > Key: HDFS-13697 > URL: https://issues.apache.org/jira/browse/HDFS-13697 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel >Priority: Major > Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch > > > While calling KeyProviderCryptoExtension decryptEncryptedKey the call stack > might not have a doAs privileged execution call (in the DFSClient for example).
> This results in losing the proxy user from UGI as UGI.getCurrentUser finds > no AccessControllerContext and does a re-login for the login user only. > This can cause the following for example: if we have set up the oozie user to > be entitled to perform actions on behalf of example_user but oozie is > forbidden to decrypt any EDEK (for security reasons), due to the above issue, > example_user entitlements are lost from UGI and the following error is > reported: > {code} > [0] > SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] > JOB[0020905-180313191552532-oozie-oozi-W] > ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting > action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message > [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with > ACL name [encrypted_key]!!] > org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not > authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
> at > org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463) > at > org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441) > at > org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523) > at > org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199) > at > org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563) > at > org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232) > at > org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63) > at org.apache.oozie.command.XCommand.call(XCommand.java:286) > at > org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332) > at > org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User > [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name > [encrypted_key]!! > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflec
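The root cause discussed in this thread — with no doAs context on the call stack, UGI.getCurrentUser falls back to the login user and the proxy user is dropped — can be illustrated with a small self-contained model. All names below (ProxyUserSketch, decryptEdek, the ThreadLocal "context") are invented for illustration; the real fix wraps the decrypt call in UserGroupInformation#doAs inside HdfsKMSUtil#decryptEncryptedDataEncryptionKey.

```java
import java.util.function.Supplier;

/**
 * Toy model of the bug: when no privileged (doAs) context is present,
 * the "current user" falls back to the login user, losing the proxy user.
 */
public class ProxyUserSketch {
  private static final ThreadLocal<String> CONTEXT_USER = new ThreadLocal<>();
  private static final String LOGIN_USER = "oozie";

  // Analogue of UGI.getCurrentUser(): an empty context means we effectively
  // re-login as the login user, dropping any proxy user.
  public static String getCurrentUser() {
    String u = CONTEXT_USER.get();
    return u != null ? u : LOGIN_USER;
  }

  // Analogue of UserGroupInformation#doAs: run the action with the proxy
  // user installed in the calling context, then restore the previous state.
  public static <T> T doAs(String proxyUser, Supplier<T> action) {
    String previous = CONTEXT_USER.get();
    CONTEXT_USER.set(proxyUser);
    try {
      return action.get();
    } finally {
      if (previous == null) {
        CONTEXT_USER.remove();
      } else {
        CONTEXT_USER.set(previous);
      }
    }
  }

  // Stand-in for the per-user-authorized EDEK decrypt call.
  public static String decryptEdek() {
    return "DECRYPT_EEK as " + getCurrentUser();
  }
}
```

Without the doAs wrapper the decrypt runs as the blacklisted login user (oozie) and fails authorization; with it, the proxied user (example_user) is preserved.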
[jira] [Commented] (HDDS-191) Queue SCMCommands via EventQueue in SCM
[ https://issues.apache.org/jira/browse/HDDS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522791#comment-16522791 ] Hudson commented on HDDS-191: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14477 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14477/]) HDDS-191. Queue SCMCommands via EventQueue in SCM. Contributed by Elek, (aengineer: rev a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0) * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java * (add) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CommandForDatanode.java * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java > Queue SCMCommands via EventQueue in SCM > --- > > Key: HDDS-191 > URL: https://issues.apache.org/jira/browse/HDDS-191 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-191.001.patch, HDDS-191.002.patch > > > As a first step towards a ReplicationManager I propose to introduce the > EventQueue to the StorageContainerManager and enable sending SCMCommands via > EventQueue. > With this separation the ReplicationManager could easily send the appropriate > SCMCommand (eg. CopyContainer) to the EventQueue without a hard dependency on > the SCMNodeManager. (And later we can introduce the CommandWatchers without > modifying the ReplicationManager part)
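The decoupling described in HDDS-191 — a producer such as a future ReplicationManager publishes SCMCommands as events, with no hard dependency on SCMNodeManager — can be sketched with a minimal publish/subscribe queue. This is a toy model with invented names (EventQueueSketch, processAll), not the actual HDDS EventQueue, which dispatches asynchronously on executor threads:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.function.Consumer;

/** Minimal model: producers fire typed events into a queue and subscribed
 *  handlers receive them when the queue is drained. */
public class EventQueueSketch {
  private final Map<String, List<Consumer<Object>>> handlers = new HashMap<>();
  private final Queue<Object[]> pending = new ArrayDeque<>();

  public void subscribe(String eventType, Consumer<Object> handler) {
    handlers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
  }

  // A producer needs only the queue, not the component that will
  // eventually execute the command.
  public void fireEvent(String eventType, Object payload) {
    pending.add(new Object[] {eventType, payload});
  }

  // Drain the queue, dispatching each event to its subscribers
  // (the real queue does this on executor threads).
  public void processAll() {
    Object[] event;
    while ((event = pending.poll()) != null) {
      for (Consumer<Object> h
          : handlers.getOrDefault((String) event[0], Collections.emptyList())) {
        h.accept(event[1]);
      }
    }
  }
}
```

The design choice mirrored here is that command watchers can later be added as extra subscribers without touching the producer.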
[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package
[ https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ian Pickering updated HDFS-13695: - Attachment: HDFS-13695.v2.patch > Move logging to slf4j in HDFS package > - > > Key: HDFS-13695 > URL: https://issues.apache.org/jira/browse/HDFS-13695 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Giovanni Matteo Fumarola >Assignee: Ian Pickering >Priority: Major > Attachments: HDFS-13695.v1.patch, HDFS-13695.v2.patch > > > Move logging to slf4j in HDFS package
[jira] [Updated] (HDDS-191) Queue SCMCommands via EventQueue in SCM
[ https://issues.apache.org/jira/browse/HDDS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-191: -- Resolution: Fixed Status: Resolved (was: Patch Available) [~elek] Thank you for the contribution. I have committed this to trunk. > Queue SCMCommands via EventQueue in SCM > --- > > Key: HDDS-191 > URL: https://issues.apache.org/jira/browse/HDDS-191 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-191.001.patch, HDDS-191.002.patch > > > As a first step towards a ReplicationManager I propose to introduce the > EventQueue to the StorageContainerManager and enable sending SCMCommands via > EventQueue. > With this separation the ReplicationManager could easily send the appropriate > SCMCommand (eg. CopyContainer) to the EventQueue without a hard dependency on > the SCMNodeManager. (And later we can introduce the CommandWatchers without > modifying the ReplicationManager part)
[jira] [Updated] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-193: -- Status: Patch Available (was: Open) > Make Datanode heartbeat dispatcher in SCM event based > - > > Key: HDDS-193 > URL: https://issues.apache.org/jira/browse/HDDS-193 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-193.001.patch > > > HDDS-163 introduced a new dispatcher on the SCM side to send the heartbeat > report parts to the appropriate listeners. I propose to make it EventQueue > based to handle/monitor these async calls in the same way as the other events. > Report handlers would subscribe to the specific events to process the > information. >
[jira] [Updated] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
[ https://issues.apache.org/jira/browse/HDDS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-193: -- Attachment: HDDS-193.001.patch > Make Datanode heartbeat dispatcher in SCM event based > - > > Key: HDDS-193 > URL: https://issues.apache.org/jira/browse/HDDS-193 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-193.001.patch > > > HDDS-163 introduced a new dispatcher on the SCM side to send the heartbeat > report parts to the appropriate listeners. I propose to make it EventQueue > based to handle/monitor these async calls in the same way as the other events. > Report handlers would subscribe to the specific events to process the > information. >
[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522764#comment-16522764 ] genericqa commented on HDDS-94: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/acceptance-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 22s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 40s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/acceptance-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s{color} | {color:green} container-service in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-dist in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} acceptance-test in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. {color} | |
[jira] [Updated] (HDFS-13617) Allow wrapping NN QOP into token in encrypted message
[ https://issues.apache.org/jira/browse/HDFS-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-13617: -- Attachment: HDFS-13617.002.patch > Allow wrapping NN QOP into token in encrypted message > - > > Key: HDFS-13617 > URL: https://issues.apache.org/jira/browse/HDFS-13617 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13617.001.patch, HDFS-13617.002.patch > > > This Jira allows NN to configurably wrap the QOP it has established with the > client into the token message sent back to the client. The QOP is sent back > in an encrypted message, using the BlockAccessToken encryption key as the key.
[jira] [Commented] (HDFS-13617) Allow wrapping NN QOP into token in encrypted message
[ https://issues.apache.org/jira/browse/HDFS-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522752#comment-16522752 ] Chen Liang commented on HDFS-13617: --- Rebased with v002 patch. > Allow wrapping NN QOP into token in encrypted message > - > > Key: HDFS-13617 > URL: https://issues.apache.org/jira/browse/HDFS-13617 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13617.001.patch, HDFS-13617.002.patch > > > This Jira allows NN to configurably wrap the QOP it has established with the > client into the token message sent back to the client. The QOP is sent back > in an encrypted message, using the BlockAccessToken encryption key as the key.
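The mechanism HDFS-13617 describes — the NameNode encrypting the negotiated QOP into the token message so that only a holder of the shared key can recover it — can be sketched with standard javax.crypto primitives. This is an illustrative model, not the actual patch: the key below merely stands in for the BlockAccessToken encryption key, and the AES-GCM cipher choice and all class/method names are assumptions for the sketch.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

/** Illustrative model of wrapping a QOP string in an encrypted message. */
public class QopWrapSketch {
  // Generate a fresh AES key (stand-in for the shared token encryption key).
  public static SecretKey newKey() {
    try {
      KeyGenerator kg = KeyGenerator.getInstance("AES");
      kg.init(128);
      return kg.generateKey();
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  // Server side: encrypt the QOP string before placing it in the token message.
  public static byte[] wrap(SecretKey key, byte[] iv, String qop) {
    try {
      Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
      c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
      return c.doFinal(qop.getBytes(StandardCharsets.UTF_8));
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  // Client side: decrypt (and authenticate, via the GCM tag) the wrapped QOP.
  public static String unwrap(SecretKey key, byte[] iv, byte[] wrapped) {
    try {
      Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
      c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
      return new String(c.doFinal(wrapped), StandardCharsets.UTF_8);
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
```

Because GCM is authenticated, a tampered wrapped QOP fails to decrypt rather than yielding a forged value.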
[jira] [Commented] (HDDS-191) Queue SCMCommands via EventQueue in SCM
[ https://issues.apache.org/jira/browse/HDDS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522747#comment-16522747 ] Anu Engineer commented on HDDS-191: --- +1, I will commit this now. One minor comment: we can make the EventQueue a member of SCM instead of a local variable. We can do that in a later patch like HDDS-193. > Queue SCMCommands via EventQueue in SCM > --- > > Key: HDDS-191 > URL: https://issues.apache.org/jira/browse/HDDS-191 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-191.001.patch, HDDS-191.002.patch > > > As a first step towards a ReplicationManager I propose to introduce the > EventQueue to the StorageContainerManager and enable sending SCMCommands via > EventQueue. > With this separation the ReplicationManager could easily send the appropriate > SCMCommand (eg. CopyContainer) to the EventQueue without a hard dependency on > the SCMNodeManager. (And later we can introduce the CommandWatchers without > modifying the ReplicationManager part)
[jira] [Created] (HDDS-193) Make Datanode heartbeat dispatcher in SCM event based
Elek, Marton created HDDS-193: - Summary: Make Datanode heartbeat dispatcher in SCM event based Key: HDDS-193 URL: https://issues.apache.org/jira/browse/HDDS-193 Project: Hadoop Distributed Data Store Issue Type: Improvement Components: SCM Reporter: Elek, Marton Assignee: Elek, Marton Fix For: 0.2.1 HDDS-163 introduced a new dispatcher on the SCM side to send the heartbeat report parts to the appropriate listeners. I propose to make it EventQueue based to handle/monitor these async calls in the same way as the other events. Report handlers would subscribe to the specific events to process the information.
[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522732#comment-16522732 ] Sandeep Nemuri commented on HDDS-94: [^HDDS-94.004.patch] updated changes for new docker files (HDDS-177) > Change ozone datanode command to start the standalone datanode plugin > - > > Key: HDDS-94 > URL: https://issues.apache.org/jira/browse/HDDS-94 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Elek, Marton >Assignee: Sandeep Nemuri >Priority: Major > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, > HDDS-94.004.patch > > > The current ozone datanode command starts the regular hdfs datanode with an > enabled HddsDatanodeService as a datanode plugin. > The goal is to start only the HddsDatanodeService.java (main function is > already there but GenericOptionParser should be adopted).
[jira] [Updated] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandeep Nemuri updated HDDS-94: --- Attachment: HDDS-94.004.patch > Change ozone datanode command to start the standalone datanode plugin > - > > Key: HDDS-94 > URL: https://issues.apache.org/jira/browse/HDDS-94 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Elek, Marton >Assignee: Sandeep Nemuri >Priority: Major > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch, > HDDS-94.004.patch > > > The current ozone datanode command starts the regular hdfs datanode with an > enabled HddsDatanodeService as a datanode plugin. > The goal is to start only the HddsDatanodeService.java (main function is > already there but GenericOptionParser should be adopted).
[jira] [Updated] (HDFS-13688) Introduce msync API call
[ https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-13688: -- Attachment: HDFS-13688-HDFS-12943.WIP.002.patch > Introduce msync API call > > > Key: HDFS-13688 > URL: https://issues.apache.org/jira/browse/HDFS-13688 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13688-HDFS-12943.WIP.002.patch, > HDFS-13688-HDFS-12943.WIP.patch > > > As mentioned in the design doc in HDFS-12943, to ensure consistent read, we > need to introduce an RPC call {{msync}}. Specifically, client can issue a > msync call to Observer node along with a transactionID. The msync will only > return when the Observer's transactionID has caught up to the given ID. This > JIRA is to add this API.
[jira] [Commented] (HDFS-13688) Introduce msync API call
[ https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522703#comment-16522703 ] Chen Liang commented on HDFS-13688: --- Thanks [~linyiqun] for the review, appreciate it! bq. LastSeenId isn't tracked for both ANN and SBN. Thanks for checking with the design doc! The idea in the patch was based on some offline discussion we had, so there is a bit of difference there regarding what happens after the client sees the ANN's state id. There were two ideas we evaluated. One is that the client keeps sending msync calls to the Observer; the Observer returns immediately with its state id, and the client only returns once the observer state id catches up. A downside here is that multiple RPC calls are made, abusing the RPC queue and handler CPU time on the server side. The other approach is that the client makes one single call, and the Observer side blocks the call until the state id catches up. Since it is the observer side making sure the id catches up, the client side no longer needs to keep track of the observer id. A downside here is that the server needs more thread resources (i.e. the executor introduced), but I think this is a fair tradeoff compared to the other way. bq. syncTnxId passed in msync call large than LastAppliedOrWrittenTxId in ANN. Need to throw the exception? Fixed. bq. The condition check should be HAServiceState.ACTIVE.toString().equals(namesystem.getHAState()? This led me to think about in what situations msync will be called on a standby. It seems this happens only when some role transition is happening; I will need to think about whether all transition cases are properly handled here. Right now I'm inclined to believe the change you suggested should be sufficient. Fixed in the WIP.v002 patch. This is actually an interesting point, thanks for bringing it up! bq. Why not just pass the msyncExecutor as null there? Fixed.
> Introduce msync API call > > > Key: HDFS-13688 > URL: https://issues.apache.org/jira/browse/HDFS-13688 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13688-HDFS-12943.WIP.002.patch, > HDFS-13688-HDFS-12943.WIP.patch > > > As mentioned in the design doc in HDFS-12943, to ensure consistent read, we > need to introduce an RPC call {{msync}}. Specifically, client can issue a > msync call to Observer node along with a transactionID. The msync will only > return when the Observer's transactionID has caught up to the given ID. This > JIRA is to add this API.
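The single-blocking-call design discussed in this thread — the Observer holds the one msync RPC until its applied transaction id reaches the requested id, instead of the client polling — can be sketched with a lock and condition variable. All names here (MsyncSketch, advanceTo) are invented for illustration; the actual patch hands the wait off to an executor on the NameNode side rather than blocking the handler thread directly.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

/** Toy model of observer-side blocking msync. */
public class MsyncSketch {
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition caughtUp = lock.newCondition();
  private long lastAppliedTxId = 0;

  // Called as the Observer tails and applies edits.
  public void advanceTo(long txId) {
    lock.lock();
    try {
      lastAppliedTxId = Math.max(lastAppliedTxId, txId);
      caughtUp.signalAll();
    } finally {
      lock.unlock();
    }
  }

  // One RPC: returns true once caught up to syncTxId, false on timeout.
  public boolean msync(long syncTxId, long timeoutMs) {
    lock.lock();
    try {
      long nanos = TimeUnit.MILLISECONDS.toNanos(timeoutMs);
      while (lastAppliedTxId < syncTxId) {
        if (nanos <= 0L) {
          return false; // timed out before catching up
        }
        try {
          nanos = caughtUp.awaitNanos(nanos);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          return false;
        }
      }
      return true;
    } finally {
      lock.unlock();
    }
  }
}
```

Compared with client-side polling, the caller issues exactly one request and the tradeoff moves to server thread resources, matching the reasoning in the comment above.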
[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522689#comment-16522689 ] Chao Sun commented on HDFS-12976: - Another thought I'm thinking is to add a flag in {{RpcRequestHeaderProto}} and then check this flag on the NameNode server side, using {{AlignmentContext}}. With this approach, no change on {{ConfiguredFailoverProxyProvider}} will be required. Let me know your opinion on this. > Introduce ObserverReadProxyProvider > --- > > Key: HDFS-12976 > URL: https://issues.apache.org/jira/browse/HDFS-12976 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Konstantin Shvachko >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-12976-HDFS-12943.000.patch, > HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, > HDFS-12976-HDFS-12943.003.patch, HDFS-12976.WIP.patch > > > {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} > interface and be able to submit read requests to ANN and SBN(s). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12976) Introduce ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522689#comment-16522689 ] Chao Sun edited comment on HDFS-12976 at 6/25/18 7:03 PM: -- Another idea I'm thinking is to add a flag in {{RpcRequestHeaderProto}} and then check this flag on the NameNode server side, using {{AlignmentContext}}. With this approach, no change on {{ConfiguredFailoverProxyProvider}} will be required. Let me know your opinion on this. was (Author: csun): Another thought I'm thinking is to add a flag in {{RpcRequestHeaderProto}} and then check this flag on the NameNode server side, using {{AlignmentContext}}. With this approach, no change on {{ConfiguredFailoverProxyProvider}} will be required. Let me know your opinion on this. > Introduce ObserverReadProxyProvider > --- > > Key: HDFS-12976 > URL: https://issues.apache.org/jira/browse/HDFS-12976 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Konstantin Shvachko >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-12976-HDFS-12943.000.patch, > HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, > HDFS-12976-HDFS-12943.003.patch, HDFS-12976.WIP.patch > > > {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} > interface and be able to submit read requests to ANN and SBN(s). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
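Chao's header-flag idea above (a new optional flag in the RPC request header that the NameNode checks server-side through an AlignmentContext-style hook) could look roughly like this. All names here are stand-ins invented for illustration; the real change would live in RpcRequestHeaderProto and the IPC server, not in these classes.

```java
// Sketch of the header-flag idea: the client sets a boolean in each request
// header, and the server defers the call only when the flag is set and the
// server's state id lags the client's last seen state id.
public class HeaderFlagSketch {
    // Stand-in for an RpcRequestHeaderProto extended with a new optional flag.
    static class RpcRequestHeader {
        final long clientStateId;
        final boolean requireStateAlignment; // the proposed new flag
        RpcRequestHeader(long clientStateId, boolean requireStateAlignment) {
            this.clientStateId = clientStateId;
            this.requireStateAlignment = requireStateAlignment;
        }
    }

    // Stand-in for the server-side AlignmentContext check.
    static boolean shouldDeferCall(RpcRequestHeader header, long serverStateId) {
        // Clients that never set the flag are served immediately, so
        // ConfiguredFailoverProxyProvider needs no change.
        return header.requireStateAlignment
            && serverStateId < header.clientStateId;
    }
}
```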
[jira] [Updated] (HDFS-13688) Introduce msync API call
[ https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-13688: -- Attachment: (was: HDFS-13688-HDFS-12943.WIP.002.patch) > Introduce msync API call > > > Key: HDFS-13688 > URL: https://issues.apache.org/jira/browse/HDFS-13688 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13688-HDFS-12943.WIP.patch > > > As mentioned in the design doc in HDFS-12943, to ensure consistent read, we > need to introduce an RPC call {{msync}}. Specifically, client can issue a > msync call to Observer node along with a transactionID. The msync will only > return when the Observer's transactionID has caught up to the given ID. This > JIRA is to add this API. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-192) Create new SCMCommand to request a replication of a container
[ https://issues.apache.org/jira/browse/HDDS-192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522678#comment-16522678 ] Bharat Viswanadham commented on HDDS-192: - Hi [~elek] +1 LGTM. I think Jenkins failure is not related to this patch, it is failing during installing packages during yarn UI building. [INFO] Running 'bower install' in /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp [ERROR] bower ember-load-initializers#0.1.7 EINVRES Request to https://bower.herokuapp.com/packages/ember-load-initializers failed with 502 > Create new SCMCommand to request a replication of a container > - > > Key: HDDS-192 > URL: https://issues.apache.org/jira/browse/HDDS-192 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-192.001.patch, HDDS-192.002.patch > > > ReplicationManager needs to request replication/copy of container or deletion > of container. We have DeleteContainerCommand (a command which is part of the > datanode heartbeat response) but no command to request a copy of a container. > This patch adds the command with all the required protobuf > serialization/deserialization boilerplate to make it easier to review further > patches. > No business logic in this patch, but there is a unit test which checks if the > message is arrived to the datanode site from the scm side. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522675#comment-16522675 ] Bharat Viswanadham edited comment on HDDS-173 at 6/25/18 6:47 PM: -- Hi [~hanishakoneru] Thanks for the patch. A few comments:
1. {{optional ContainerType containerType = 20;}} is added to ContainerCommandRequestProto, and it is only set during createContainer. It is used in HDDSDispatcher to route the request to the specific handler. Can we make the default KeyValueContainer, so that it does not need to be set for other requests?
2. Metrics initialization: I think this should be moved to Handler, or the same instance should be passed to the handler, so that metrics can be incremented.
3. Handler.java: {{private Map handlers;}} is not initialized; I think we might get an NPE when we add handlers.
4. In handleReadContainer, the code below might return null if the containerId is not in the map, so we should handle the null case: {{KeyValueContainer kvContainer = (KeyValueContainer) containerSet.getContainer(containerID);}}
5. I think the above comment applies to all other requests. Or the question is: can these requests arrive before createContainer?
6. In deleteContainer, we should also delete the container from the ContainerSet's container map.
7. Line 398: deleteChunk does not throw IOException, so do we need this? The same comment applies to the other chunk-related operations.
8. If we have containerType in ContainerCommandRequestProto, I think we don't need containerType in CreateContainerRequestProto.
9. In handleCreateContainer, do we need to check whether the container already exists before createContainer, by calling containerSet.getContainer?
10. getFromProtoBuf in KeyValueContainerData: I think we no longer need this, since the createContainer request type has changed.
was (Author: bharatviswa): Hi [~hanishakoneru] Thanks for the patch. Few comments I have are: 1.
This is added(optional ContainerType containerType = 20;) to ContainerCommandRequestProto, and it is only set during createContainer, This is used in HDDSDispatcher to send the request to specific handler. So, can we make default as KeyValueContainer, so that it is not needed to set for other requests? 2. Metrics intialization, I think this should be moved to Handler or same one should be passed to handler, so that metrics can be incremented. 3. Handler.java private Map handlers; This variable is not intialized, i think we might get NPE when we add handlers. 4. In handleReadContainer, below code might return null, if containerId is not in map. So, we should null case. KeyValueContainer kvContainer = (KeyValueContainer) containerSet.getContainer(containerID); 5. I think above comment applicable for all other requests. Or the question is can these requests come before create Container? 6. In deletecontainer, we should delete the container from containerset container map. 7. Line 398: deleteChunk does not throw IOException, do we need this? Same comment applicable for other chunk related operations. 8. If we have containerType in ContainerCommandRequestProto, I think we don't need containerType in CreateContainerRequestProto. 9. In handleCreateContainer, do we need to check whether this container already exists before createContainer, by calling containerset.getContainer? 10. getFromProtoBuf in KeyValueContainerData, I think now we dont require this, as createCOntainer Request type is changed. > Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. 
Each ContainerType will have its own > Handler. The Handler class will process the message. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13688) Introduce msync API call
[ https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-13688: -- Attachment: HDFS-13688-HDFS-12943.WIP.002.patch > Introduce msync API call > > > Key: HDFS-13688 > URL: https://issues.apache.org/jira/browse/HDFS-13688 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13688-HDFS-12943.WIP.002.patch, > HDFS-13688-HDFS-12943.WIP.patch > > > As mentioned in the design doc in HDFS-12943, to ensure consistent read, we > need to introduce an RPC call {{msync}}. Specifically, client can issue a > msync call to Observer node along with a transactionID. The msync will only > return when the Observer's transactionID has caught up to the given ID. This > JIRA is to add this API. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design
[ https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522675#comment-16522675 ] Bharat Viswanadham commented on HDDS-173: - Hi [~hanishakoneru] Thanks for the patch. A few comments:
1. {{optional ContainerType containerType = 20;}} is added to ContainerCommandRequestProto, and it is only set during createContainer. It is used in HDDSDispatcher to route the request to the specific handler. Can we make the default KeyValueContainer, so that it does not need to be set for other requests?
2. Metrics initialization: I think this should be moved to Handler, or the same instance should be passed to the handler, so that metrics can be incremented.
3. Handler.java: {{private Map handlers;}} is not initialized; I think we might get an NPE when we add handlers.
4. In handleReadContainer, the code below might return null if the containerId is not in the map, so we should handle the null case: {{KeyValueContainer kvContainer = (KeyValueContainer) containerSet.getContainer(containerID);}}
5. I think the above comment applies to all other requests. Or the question is: can these requests arrive before createContainer?
6. In deleteContainer, we should also delete the container from the ContainerSet's container map.
7. Line 398: deleteChunk does not throw IOException, so do we need this? The same comment applies to the other chunk-related operations.
8. If we have containerType in ContainerCommandRequestProto, I think we don't need containerType in CreateContainerRequestProto.
9. In handleCreateContainer, do we need to check whether the container already exists before createContainer, by calling containerSet.getContainer?
10. getFromProtoBuf in KeyValueContainerData: I think we no longer need this, since the createContainer request type has changed.
> Refactor Dispatcher and implement Handler for new ContainerIO design > > > Key: HDDS-173 > URL: https://issues.apache.org/jira/browse/HDDS-173 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-173-HDDS-48.001.patch > > > Dispatcher will pass the ContainerCommandRequests to the corresponding > Handler based on the ContainerType. Each ContainerType will have its own > Handler. The Handler class will process the message. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
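The routing pattern under review above (a dispatcher that selects a per-ContainerType handler, with an eagerly initialized handler map addressing review point 3 and a default type for requests that do not set one, as in point 1) can be sketched as follows. Class and method names are illustrative, not the HDDS-173 patch code.

```java
import java.util.EnumMap;
import java.util.Map;

// Sketch: dispatch a container request to the handler registered for its
// container type, defaulting to KeyValueContainer when no type is set.
public class DispatchSketch {
    enum ContainerType { KEY_VALUE_CONTAINER }

    interface Handler { String handle(String request); }

    // Initialized eagerly at construction, so registration cannot NPE
    // (review point 3).
    private final Map<ContainerType, Handler> handlers =
        new EnumMap<>(ContainerType.class);

    public void register(ContainerType type, Handler handler) {
        handlers.put(type, handler);
    }

    public String dispatch(ContainerType type, String request) {
        // Fall back to KeyValueContainer when the request omits the type
        // (review point 1's suggested default).
        ContainerType effective =
            (type != null) ? type : ContainerType.KEY_VALUE_CONTAINER;
        Handler h = handlers.get(effective);
        if (h == null) {
            throw new IllegalStateException("No handler for " + effective);
        }
        return h.handle(request);
    }
}
```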
[jira] [Commented] (HDDS-192) Create new SCMCommand to request a replication of a container
[ https://issues.apache.org/jira/browse/HDDS-192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522669#comment-16522669 ] genericqa commented on HDDS-192: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 59s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 48s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 23m 31s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 17s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 21m 59s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 21m 59s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 21m 59s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s{color} | {color:green} container-service in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 26s{color} | {color:green} server-scm in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 32s{color} | {color:red} integration-test in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}139m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.TestStorageContainerManager | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDDS-192 | | JIRA Pa
[jira] [Updated] (HDDS-59) Ozone client should update blocksize in OM for sub-block writes
[ https://issues.apache.org/jira/browse/HDDS-59?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDDS-59: -- Summary: Ozone client should update blocksize in OM for sub-block writes (was: Ozone client should update blocksize in KSM for sub-block writes) > Ozone client should update blocksize in OM for sub-block writes > --- > > Key: HDDS-59 > URL: https://issues.apache.org/jira/browse/HDDS-59 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Client >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-59.001.patch > > > Currently ozone client allocates block of the required length from SCM > through KSM. > However it might happen due to error cases or because of small writes that > the allocated block is not completely written. > In these cases, client should update the KSM with the length of the block. > This will help in error cases as well as cases where client does not write > the complete block to Ozone. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
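The fix HDDS-59 describes (the client reporting the actual written length back to the manager when a block is only partially written) might look schematically like this. The class and field names are hypothetical stand-ins for the sketch, not the patch code.

```java
// Sketch: on close, commit the length actually written, which may be
// shorter than the length allocated by SCM, instead of the allocation.
public class BlockCommitSketch {
    static class BlockInfo {
        final long allocatedLength;
        long committedLength;
        BlockInfo(long allocatedLength) { this.allocatedLength = allocatedLength; }
    }

    // Called when the client closes the stream (or hits an error mid-write).
    static long commit(BlockInfo block, long bytesWritten) {
        if (bytesWritten < 0 || bytesWritten > block.allocatedLength) {
            throw new IllegalArgumentException(
                "bad written length: " + bytesWritten);
        }
        block.committedLength = bytesWritten; // may be < allocatedLength
        return block.committedLength;
    }
}
```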
[jira] [Commented] (HDDS-191) Queue SCMCommands via EventQueue in SCM
[ https://issues.apache.org/jira/browse/HDDS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522600#comment-16522600 ] genericqa commented on HDDS-191: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 14s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s{color} | {color:green} container-service in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 50s{color} | {color:green} server-scm in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 69m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDDS-191 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929048/HDDS-191.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 555b40a58610 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1ba4e62 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/354/testReport/ | | Max. process+thread count | 336 (vs. ulimit of 1) | | modules | C: hadoop-hdd
[jira] [Updated] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandeep Nemuri updated HDDS-94: --- Attachment: HDDS-94.003.patch > Change ozone datanode command to start the standalone datanode plugin > - > > Key: HDDS-94 > URL: https://issues.apache.org/jira/browse/HDDS-94 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Elek, Marton >Assignee: Sandeep Nemuri >Priority: Major > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch > > > The current ozone datanode command starts the regular hdfs datanode with an > enabled HddsDatanodeService as a datanode plugin. > The goal is to start only the HddsDatanodeService.java (main function is > already there but GenericOptionParser should be adopted). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-94) Change ozone datanode command to start the standalone datanode plugin
[ https://issues.apache.org/jira/browse/HDDS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522584#comment-16522584 ] Sandeep Nemuri commented on HDDS-94: I missed removing the namenode dependency from the robot file, which failed the acceptance test. Attaching the v3 patch for review: [^HDDS-94.003.patch] > Change ozone datanode command to start the standalone datanode plugin > - > > Key: HDDS-94 > URL: https://issues.apache.org/jira/browse/HDDS-94 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Reporter: Elek, Marton >Assignee: Sandeep Nemuri >Priority: Major > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-94.001.patch, HDDS-94.002.patch, HDDS-94.003.patch > > > The current ozone datanode command starts the regular hdfs datanode with an > enabled HddsDatanodeService as a datanode plugin. > The goal is to start only the HddsDatanodeService.java (the main function is > already there but GenericOptionsParser should be adopted). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager
[ https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522539#comment-16522539 ] genericqa commented on HDDS-167: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HDDS-167 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDDS-167 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929050/HDDS-167.04.patch | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/355/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Rename KeySpaceManager to OzoneManager > -- > > Key: HDDS-167 > URL: https://issues.apache.org/jira/browse/HDDS-167 > Project: Hadoop Distributed Data Store > Issue Type: Task > Components: Ozone Manager >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, > HDDS-167.04.patch > > > The Ozone KeySpaceManager daemon was renamed to OzoneManager. There's some > more changes needed to complete the rename everywhere e.g. > - command-line > - documentation > - unit tests > - Acceptance tests -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-191) Queue SCMCommands via EventQueue in SCM
[ https://issues.apache.org/jira/browse/HDDS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522519#comment-16522519 ] Elek, Marton commented on HDDS-191: --- Thank you for the review, [~anu]. I added two precondition checks to the constructor of CommandForDatanode for the fields, and one for SCMNodeManager.onMessage. (The unused import is also fixed.) > Queue SCMCommands via EventQueue in SCM > --- > > Key: HDDS-191 > URL: https://issues.apache.org/jira/browse/HDDS-191 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-191.001.patch, HDDS-191.002.patch > > > As a first step towards to a ReplicationManager I propose to introduce the > EventQueue to the StorageContainerManager and enable to send SCMCommands via > EventQueue. > With this separation the ReplicationManager could easily send the appropriate > SCMCommand (eg. CopyContainer) to the EventQueue without hard dependency to > the SCMNodeManager. (And later we can introduce the CommandWatchers without > modifying the ReplicationManager part)
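For orientation, the precondition checks described above might look roughly like the sketch below. This is a hypothetical stand-in, not the actual HDDS code: the real CommandForDatanode presumably uses Guava's Preconditions, while java.util.Objects is used here to keep the example self-contained, and the field names are illustrative.

```java
import java.util.Objects;
import java.util.UUID;

// Hypothetical sketch of an event-queue payload that validates its fields in
// the constructor, failing fast instead of with a late NPE in a handler.
class CommandForDatanode<T> {
  private final UUID datanodeId;
  private final T command;

  CommandForDatanode(UUID datanodeId, T command) {
    // both fields are required; reject nulls at construction time
    this.datanodeId = Objects.requireNonNull(datanodeId, "datanodeId == null");
    this.command = Objects.requireNonNull(command, "command == null");
  }

  UUID getDatanodeId() { return datanodeId; }
  T getCommand() { return command; }
}
```

The benefit of checking in the constructor is that a bad event is rejected at the point it is created, rather than surfacing later inside the queue consumer.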
[jira] [Commented] (HDDS-167) Rename KeySpaceManager to OzoneManager
[ https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522516#comment-16522516 ] Arpit Agarwal commented on HDDS-167: The v04 patch fixes some acceptance test failures. > Rename KeySpaceManager to OzoneManager > -- > > Key: HDDS-167 > URL: https://issues.apache.org/jira/browse/HDDS-167 > Project: Hadoop Distributed Data Store > Issue Type: Task > Components: Ozone Manager >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, > HDDS-167.04.patch > > > The Ozone KeySpaceManager daemon was renamed to OzoneManager. There's some > more changes needed to complete the rename everywhere e.g. > - command-line > - documentation > - unit tests > - Acceptance tests
[jira] [Comment Edited] (HDDS-167) Rename KeySpaceManager to OzoneManager
[ https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522516#comment-16522516 ] Arpit Agarwal edited comment on HDDS-167 at 6/25/18 4:48 PM: - The v04 patch fixes some acceptance test failures by updating robot framework files. Other acceptance tests are still failing; I am looking into them. was (Author: arpitagarwal): The v04 patch fixes some acceptance test failures. > Rename KeySpaceManager to OzoneManager > -- > > Key: HDDS-167 > URL: https://issues.apache.org/jira/browse/HDDS-167 > Project: Hadoop Distributed Data Store > Issue Type: Task > Components: Ozone Manager >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, > HDDS-167.04.patch > > > The Ozone KeySpaceManager daemon was renamed to OzoneManager. There's some > more changes needed to complete the rename everywhere e.g. > - command-line > - documentation > - unit tests > - Acceptance tests
[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager
[ https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDDS-167: --- Attachment: HDDS-167.04.patch > Rename KeySpaceManager to OzoneManager > -- > > Key: HDDS-167 > URL: https://issues.apache.org/jira/browse/HDDS-167 > Project: Hadoop Distributed Data Store > Issue Type: Task > Components: Ozone Manager >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-167.01.patch, HDDS-167.02.patch, HDDS-167.03.patch, > HDDS-167.04.patch > > > The Ozone KeySpaceManager daemon was renamed to OzoneManager. There's some > more changes needed to complete the rename everywhere e.g. > - command-line > - documentation > - unit tests > - Acceptance tests
[jira] [Updated] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HDFS-13665: Description: In HDFS-13399 we addressed a race condition in AlignmentContext processing where the RPC response would assign a transactionId independently of the transaction's own processing, resulting in a stateId response that was lower than expected. However, this caused us to serialize the RpcResponse twice in order to address the header field change. See here: https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 And here: https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 At the end it was agreed upon to move the logic of Server.setupResponse into Server.doResponse directly. was: In HDFS-13399 we addressed a race condition in AlignmentContext processing where the RPC response would assign a transactionId independently of the transactions own processing, resulting in a stateId response that was lower than expected. See here: https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 And here: https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 At the end it was agreed upon to move the logic of Server.setupResponse into Server.doResponse directly. 
> Move RPC response serialization into Server.doResponse > -- > > Key: HDFS-13665 > URL: https://issues.apache.org/jira/browse/HDFS-13665 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > > In HDFS-13399 we addressed a race condition in AlignmentContext processing > where the RPC response would assign a transactionId independently of the > transactions own processing, resulting in a stateId response that was lower > than expected. However this caused us to serialize the RpcResponse twice in > order to address the header field change. > See here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 > And here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 > At the end it was agreed upon to move the logic of Server.setupResponse into > Server.doResponse directly. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
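The intent of the change in this issue can be reduced to a small sketch. The types below are illustrative only, not the actual Hadoop ipc.Server code: the point is that the response bytes are built exactly once, inside doResponse(), after the final stateId is known, instead of being serialized in the handler and re-encoded when the header's stateId changes.

```java
// Illustrative sketch of single-pass response serialization; field and class
// names are stand-ins for the real (much larger) ipc.Server machinery.
class Server {
  static class Call {
    long stateId;     // filled in just before sending, so it cannot be stale
    String payload;   // the RPC result, kept unserialized until doResponse
    byte[] wire;      // wire bytes, built exactly once
  }

  long lastWrittenStateId = 42;  // stand-in for the namesystem transaction id

  void doResponse(Call call) {
    call.stateId = lastWrittenStateId;  // final header value is now known
    call.wire = serialize(call);        // single serialization pass
  }

  private byte[] serialize(Call call) {
    // stand-in for the real protobuf header + payload encoding
    return (call.stateId + ":" + call.payload).getBytes();
  }
}
```

Because the stateId is assigned and the bytes are produced in the same step, there is no window in which a stale stateId can be serialized, and no second encoding pass is needed.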
[jira] [Updated] (HDFS-13665) Move RPC response serialization into Server.doResponse
[ https://issues.apache.org/jira/browse/HDFS-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HDFS-13665: Description: In HDFS-13399 we addressed a race condition in AlignmentContext processing where the RPC response would assign a transactionId independently of the transactions own processing, resulting in a stateId response that was lower than expected. See here: https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 And here: https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 At the end it was agreed upon to move the logic of Server.setupResponse into Server.doResponse directly. was: In HDFS-13399 we addressed a race condition in AlignmentContext processing where the RPC response would assign a transactionId independently of the transactions own processing, resulting in a stateId response that was lower than expected. See here: https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 And here: https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 At the end if was agreed upon to move the logic of Server.setupResponse into Server.doResponse directly. 
> Move RPC response serialization into Server.doResponse > -- > > Key: HDFS-13665 > URL: https://issues.apache.org/jira/browse/HDFS-13665 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > > In HDFS-13399 we addressed a race condition in AlignmentContext processing > where the RPC response would assign a transactionId independently of the > transactions own processing, resulting in a stateId response that was lower > than expected. > See here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16464279&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16464279 > And here: > https://issues.apache.org/jira/browse/HDFS-13399?focusedCommentId=16498660&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16498660 > At the end it was agreed upon to move the logic of Server.setupResponse into > Server.doResponse directly. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-191) Queue SCMCommands via EventQueue in SCM
[ https://issues.apache.org/jira/browse/HDDS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-191: -- Attachment: HDDS-191.002.patch > Queue SCMCommands via EventQueue in SCM > --- > > Key: HDDS-191 > URL: https://issues.apache.org/jira/browse/HDDS-191 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-191.001.patch, HDDS-191.002.patch > > > As a first step towards to a ReplicationManager I propose to introduce the > EventQueue to the StorageContainerManager and enable to send SCMCommands via > EventQueue. > With this separation the ReplicationManager could easily send the appropriate > SCMCommand (eg. CopyContainer) to the EventQueue without hard dependency to > the SCMNodeManager. (And later we can introduce the CommandWatchers without > modifying the ReplicationManager part)
[jira] [Commented] (HDDS-192) Create new SCMCommand to request a replication of a container
[ https://issues.apache.org/jira/browse/HDDS-192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522485#comment-16522485 ] Elek, Marton commented on HDDS-192: --- Thank you very much for the careful check, [~bharatviswa]. I fixed all the items (comments/ContainerId->long/generic) and uploaded the patch. > Create new SCMCommand to request a replication of a container > - > > Key: HDDS-192 > URL: https://issues.apache.org/jira/browse/HDDS-192 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-192.001.patch, HDDS-192.002.patch > > > ReplicationManager needs to request replication/copy of container or deletion > of container. We have DeleteContainerCommand (a command which is part of the > datanode heartbeat response) but no command to request a copy of a container. > This patch adds the command with all the required protobuf > serialization/deserialization boilerplate to make it easier to review further > patches. > No business logic in this patch, but there is a unit test which checks if the > message is arrived to the datanode site from the scm side.
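For orientation, the shape of the command discussed above might resemble the plain-Java sketch below. These names are hypothetical: the actual patch defines protobuf messages plus serialization boilerplate, and per the review feedback carries the container id as a plain long rather than a ContainerId wrapper.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical plain-Java stand-in for the protobuf-backed replication
// command: it names the container to copy and candidate source datanodes.
class ReplicateContainerCommand {
  private final long containerId;            // plain long, per review feedback
  private final List<String> sourceDatanodes;

  ReplicateContainerCommand(long containerId, List<String> sourceDatanodes) {
    this.containerId = containerId;
    // defensive: the command is a message, so its contents should not mutate
    this.sourceDatanodes = Collections.unmodifiableList(sourceDatanodes);
  }

  long getContainerId() { return containerId; }
  List<String> getSourceDatanodes() { return sourceDatanodes; }
}
```

Keeping the command a dumb, immutable message is what lets this patch ship with "no business logic": the datanode side only has to prove it received the message intact.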
[jira] [Updated] (HDDS-189) Update HDDS to start OzoneManager
[ https://issues.apache.org/jira/browse/HDDS-189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDDS-189: --- Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~elek], I've committed this. > Update HDDS to start OzoneManager > - > > Key: HDDS-189 > URL: https://issues.apache.org/jira/browse/HDDS-189 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-189.01.patch, HDDS-189.02.patch > > > HDDS-167 is renaming KeySpaceManager to OzoneManager. > So let's update Hadoop Runner accordingly.
[jira] [Updated] (HDDS-192) Create new SCMCommand to request a replication of a container
[ https://issues.apache.org/jira/browse/HDDS-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-192: -- Attachment: HDDS-192.002.patch > Create new SCMCommand to request a replication of a container > - > > Key: HDDS-192 > URL: https://issues.apache.org/jira/browse/HDDS-192 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: SCM >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-192.001.patch, HDDS-192.002.patch > > > ReplicationManager needs to request replication/copy of container or deletion > of container. We have DeleteContainerCommand (a command which is part of the > datanode heartbeat response) but no command to request a copy of a container. > This patch adds the command with all the required protobuf > serialization/deserialization boilerplate to make it easier to review further > patches. > No business logic in this patch, but there is a unit test which checks if the > message is arrived to the datanode site from the scm side.
[jira] [Commented] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable
[ https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522425#comment-16522425 ] Kitti Nanasi commented on HDFS-13690: - Thanks for the comment [~gabor.bota]! I modified the code according to your comment in the v002 patch. > Improve error message when creating encryption zone while KMS is unreachable > > > Key: HDFS-13690 > URL: https://issues.apache.org/jira/browse/HDFS-13690 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, hdfs, kms >Reporter: Kitti Nanasi >Assignee: Kitti Nanasi >Priority: Minor > Attachments: HDFS-13690.001.patch, HDFS-13690.002.patch > > > In failure testing, we stopped the KMS and then tried to run some encryption > related commands. > {{hdfs crypto -createZone}} will complain with a short "RemoteException: > Connection refused." This message could be improved to explain that we cannot > connect to the KMSClientProvier. > For example, {{hadoop key list}} while KMS is down will error: > {code} > -bash-4.1$ hadoop key list > Cannot list keys for KeyProvider: > KMSClientProvider[http://hdfs-cdh5-vanilla-1.vpc.cloudera.com:16000/kms/v1/]: > Connection refusedjava.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) > at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at sun.net.NetworkClient.doConnect(NetworkClient.java:175) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) > at sun.net.www.http.HttpClient.(HttpClient.java:211) > at sun.net.www.http.HttpClient.New(HttpClient.java:308) > at 
sun.net.www.http.HttpClient.New(HttpClient.java:326) > at > sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996) > at > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932) > at > sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850) > at > org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125) > at > org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479) > at > org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286) > at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13121) NPE when request file descriptors when SC read
[ https://issues.apache.org/jira/browse/HDFS-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522419#comment-16522419 ] Zsolt Venczel commented on HDFS-13121: -- Thank you very much for taking a look, [~jojochuang]! The current solution throws an IOException at BlockReaderFactory.java#614, which is handled at BlockReaderFactory.java#631; when this problem occurs, the following is logged: {code:java} 2018-06-25 16:58:09,777 [main] WARN impl.BlockReaderFactory (BlockReaderFactory.java:requestFileDescriptors(631)) - BlockReaderFactory(fileName=null, block=BP-778337774-127.0.1.1-1529938688855:blk_1073741825_1001): error creating ShortCircuitReplica. java.io.IOException: the datanode DatanodeInfoWithStorage[127.0.0.1:41377,DS-83cd5e5c-95bb-4b16-a438-33dfc05608d8,DISK] failed to pass a file descriptor (might have reached open file limit). at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.requestFileDescriptors(BlockReaderFactory.java:614) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:553) at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache.testRequestFileDescriptorsWhenULimit(TestShortCircuitCache.java:904) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) {code} The warning message only says the box "might have reached open file limit" because I currently don't see a way to tell exactly why the native code failed to acquire new file descriptors. HDFS-5810 also explains why null is returned by the function in this scenario: {code} // This indicates an error reading from disk, or a format error. Since // it's not a socket communication problem, we return null rather than // throwing an exception. {code} As far as I understand, this approach is aligned with the defined workflow for short-circuit replica handling. Do you have any suggestions for improving the handling of this scenario? 
> NPE when request file descriptors when SC read > -- > > Key: HDFS-13121 > URL: https://issues.apache.org/jira/browse/HDFS-13121 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.0.0 >Reporter: Gang Xie >Assignee: Zsolt Venczel >Priority: Minor > Attachments: HDFS-13121.01.patch, HDFS-13121.02.patch, > HDFS-13121.03.patch, HDFS-13121.04.patch, test-only.patch > > > Recently, we hit an issue that the DFSClient throws NPE. The case is that, > the app process exceeds the limit of the max open file. In the case, the > libhadoop never throw and exception but return null to the
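The guard described in the comment above can be reduced to the sketch below. Method and parameter names are illustrative, not the actual BlockReaderFactory code: the idea is that when the native layer hands back null instead of file descriptors, the client raises a descriptive IOException rather than letting a later dereference fail with an NPE.

```java
import java.io.IOException;

// Illustrative null guard: libhadoop returns null (rather than throwing) when
// it cannot acquire file descriptors, e.g. because the process hit its
// open-file limit, so the caller must check before dereferencing.
class FdRequester {
  static Object[] requestFileDescriptors(String datanode, Object[] fds)
      throws IOException {
    if (fds == null) {
      throw new IOException("the datanode " + datanode
          + " failed to pass a file descriptor"
          + " (might have reached open file limit).");
    }
    return fds;
  }
}
```

The caller can then treat this like any other per-replica failure and fall back to a non-short-circuit read path, instead of crashing with an NPE.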
[jira] [Updated] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable
[ https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kitti Nanasi updated HDFS-13690: Attachment: HDFS-13690.002.patch > Improve error message when creating encryption zone while KMS is unreachable > > > Key: HDFS-13690 > URL: https://issues.apache.org/jira/browse/HDFS-13690 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, hdfs, kms >Reporter: Kitti Nanasi >Assignee: Kitti Nanasi >Priority: Minor > Attachments: HDFS-13690.001.patch, HDFS-13690.002.patch > > > In failure testing, we stopped the KMS and then tried to run some encryption > related commands. > {{hdfs crypto -createZone}} will complain with a short "RemoteException: > Connection refused." This message could be improved to explain that we cannot > connect to the KMSClientProvier. > For example, {{hadoop key list}} while KMS is down will error: > {code} > -bash-4.1$ hadoop key list > Cannot list keys for KeyProvider: > KMSClientProvider[http://hdfs-cdh5-vanilla-1.vpc.cloudera.com:16000/kms/v1/]: > Connection refusedjava.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) > at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at sun.net.NetworkClient.doConnect(NetworkClient.java:175) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) > at sun.net.www.http.HttpClient.(HttpClient.java:211) > at sun.net.www.http.HttpClient.New(HttpClient.java:308) > at sun.net.www.http.HttpClient.New(HttpClient.java:326) > at > 
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996) > at > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932) > at > sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850) > at > org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125) > at > org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479) > at > org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286) > at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
[ https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522322#comment-16522322 ] genericqa commented on HDFS-13697: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 39s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}216m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13697 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929013/HDFS-13697.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 726db4cf2025 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 440140c |
[jira] [Commented] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable
[ https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1656#comment-1656 ] Gabor Bota commented on HDFS-13690: --- Thanks for the patch [~knanasi]! In KMSClientProvider.java[485-489] the exception handling is starting to get a little overcomplicated, imho. {code:java} +} catch (ConnectException ex) { + throw new IOException("Failed to connect to: " + url.toString(), ex); } catch (IOException ex) { if (ex instanceof SocketTimeoutException) { LOG.warn("Failed to connect to {}:{}", url.getHost(), url.getPort()); {code} This could be modified to do separate catches of {{ConnectException}} and {{SocketTimeoutException}}, so the {{if}} branch with the {{instanceof}} check could be removed. If we really want to keep catching the {{IOException}}, I think we could just add a new else branch to the current {{IOException}} catch with the {{instanceof ConnectException}} check. > Improve error message when creating encryption zone while KMS is unreachable > > > Key: HDFS-13690 > URL: https://issues.apache.org/jira/browse/HDFS-13690 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, hdfs, kms >Reporter: Kitti Nanasi >Assignee: Kitti Nanasi >Priority: Minor > Attachments: HDFS-13690.001.patch > > > In failure testing, we stopped the KMS and then tried to run some encryption > related commands. > {{hdfs crypto -createZone}} will complain with a short "RemoteException: > Connection refused." This message could be improved to explain that we cannot > connect to the KMSClientProvider. 
> For example, {{hadoop key list}} while KMS is down will error: > {code} > -bash-4.1$ hadoop key list > Cannot list keys for KeyProvider: > KMSClientProvider[http://hdfs-cdh5-vanilla-1.vpc.cloudera.com:16000/kms/v1/]: > Connection refusedjava.net.ConnectException: Connection refused > at java.net.PlainSocketImpl.socketConnect(Native Method) > at > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) > at > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) > at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) > at java.net.Socket.connect(Socket.java:579) > at sun.net.NetworkClient.doConnect(NetworkClient.java:175) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) > at sun.net.www.http.HttpClient.(HttpClient.java:211) > at sun.net.www.http.HttpClient.New(HttpClient.java:308) > at sun.net.www.http.HttpClient.New(HttpClient.java:326) > at > sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996) > at > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932) > at > sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850) > at > org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125) > at > org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216) > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397) > at > 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479) > at > org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286) > at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
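[Editor's note] A minimal, self-contained sketch of the separate-catch shape suggested in the review comment above (this is not the actual KMSClientProvider code; {{classify}} is a hypothetical helper used only to demonstrate catch ordering). Because {{ConnectException}} and {{SocketTimeoutException}} are both subclasses of {{IOException}}, their catch blocks must come before the general {{IOException}} catch:

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.SocketTimeoutException;

public class CatchDemo {
    // Hypothetical helper standing in for the connection code under review:
    // each exception type gets its own catch, so no instanceof check is needed.
    static String classify(IOException ex) {
        try {
            throw ex;
        } catch (ConnectException e) {
            // connection refused: wrap with the URL for a clearer message
            return "connect-refused";
        } catch (SocketTimeoutException e) {
            // timed out: log host/port, as the patch does today
            return "timeout";
        } catch (IOException e) {
            // everything else falls through to the generic handling
            return "other-io";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(new ConnectException("refused")));   // connect-refused
        System.out.println(classify(new SocketTimeoutException("slow"))); // timeout
        System.out.println(classify(new IOException("generic")));        // other-io
    }
}
```

The compiler enforces the subtype-first ordering: putting the {{IOException}} catch first would make the later catches unreachable.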
[jira] [Commented] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
[ https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1650#comment-1650 ] genericqa commented on HDFS-13697: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}203m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd | | JIRA Issue | HDFS-13697 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929005/HDFS-13697.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux c0450e744a00 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build
[jira] [Commented] (HDFS-13661) Ls command with e option fails when the filesystem is not HDFS
[ https://issues.apache.org/jira/browse/HDFS-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522189#comment-16522189 ] Takanobu Asanuma commented on HDFS-13661: - I have sometimes faced the below error even when using HDFS. This is because {{maxEC}} can be 0. {noformat} $ bin/hadoop fs -ls -R -e hdfs:// ... -ls: Conversion = s, Flags = 0 {noformat} The uploaded patch also fixes this bug since it initializes {{maxEC}} with a non-zero value. > Ls command with e option fails when the filesystem is not HDFS > -- > > Key: HDFS-13661 > URL: https://issues.apache.org/jira/browse/HDFS-13661 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding, tools >Affects Versions: 3.1.0, 3.0.3 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Attachments: HDFS-13661.1.patch > > > {noformat} > $ hadoop fs -ls -e file:// > Found 10 items > -ls: Fatal internal error > java.lang.NullPointerException > at org.apache.hadoop.fs.shell.Ls.adjustColumnWidths(Ls.java:308) > at org.apache.hadoop.fs.shell.Ls.processPaths(Ls.java:242) > at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:387) > at org.apache.hadoop.fs.shell.Ls.processPathArgument(Ls.java:226) > at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285) > at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269) > at > org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120) > at org.apache.hadoop.fs.shell.Command.run(Command.java:176) > at org.apache.hadoop.fs.FsShell.run(FsShell.java:328) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:391) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
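[Editor's note] The {{maxEC}} failure mode above can be reproduced with plain {{String.format}}: when the computed column width is 0, the format string degenerates to {{"%0s"}}, where the {{0}} is parsed as the zero-padding flag (illegal for the {{s}} conversion) rather than a width. This is a hedged sketch; {{pad}} is a stand-in for the width handling in {{Ls.adjustColumnWidths}}, not the actual code:

```java
import java.util.FormatFlagsConversionMismatchException;

public class LsWidthDemo {
    // Stand-in for building a column format string from a computed max width.
    static String pad(int width, String v) {
        return String.format("%" + width + "s", v);
    }

    public static void main(String[] args) {
        try {
            pad(0, "x"); // "%0s": the '0' becomes a flag, not a width
        } catch (FormatFlagsConversionMismatchException e) {
            // Message corresponds to the "-ls: Conversion = s, Flags = 0" error above
            System.out.println(e.getMessage());
        }
        // Initializing the width with a non-zero minimum avoids the failure.
        System.out.println("[" + pad(2, "x") + "]"); // prints "[ x]"
    }
}
```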
[jira] [Created] (HDFS-13698) [PROVIDED Phase 2] Provided ReplicaMap should be LRU with separate lookup from normal Replicas
Ewan Higgs created HDFS-13698: - Summary: [PROVIDED Phase 2] Provided ReplicaMap should be LRU with separate lookup from normal Replicas Key: HDFS-13698 URL: https://issues.apache.org/jira/browse/HDFS-13698 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Ewan Higgs Assignee: Virajith Jalaparti The existing ReplicaMap uses {{ExtendedBlock}} to lookup the replica information. However, Provided replicas should not be in the ReplicaMap; instead they should be lookups in the AliasMap. In order to handle this case, the ReplicaMap lookups should be split into two phases: Lookup by normal ReplicaMap (as is done now) and lookup in AliasMap to see if there is also a Provided replica. The performance of this second provided lookup should be sped up using an LRU cache. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
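[Editor's note] A rough sketch of the LRU piece of the proposal above (hypothetical class name; this is not the proposed patch). A {{LinkedHashMap}} constructed in access order with a bounded {{removeEldestEntry}} gives the LRU eviction that would front the slower AliasMap lookup:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical LRU cache fronting the AliasMap lookup for Provided replicas;
// the normal ReplicaMap lookup would still run first, as it does today.
public class ProvidedLookupCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public ProvidedLookupCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true orders iteration LRU-first
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once we exceed capacity.
        return size() > capacity;
    }
}
```

A miss in this cache would fall through to the AliasMap and populate the cache on the way back.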
[jira] [Commented] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
[ https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522126#comment-16522126 ] Zsolt Venczel commented on HDFS-13697: -- Thanks for the review [~gabor.bota] and good catch. I've updated the patch to fix the test-related issue. > EDEK decrypt fails due to proxy user being lost because of empty > AccessControllerContext > > > Key: HDFS-13697 > URL: https://issues.apache.org/jira/browse/HDFS-13697 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel >Priority: Major > Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch > > > While calling KeyProviderCryptoExtension decryptEncryptedKey the call stack > might not have doAs privileged execution call (in the DFSClient for example). > This results in losing the proxy user from UGI as UGI.getCurrentUser finds > no AccessControllerContext and does a re-login for the login user only. > This can cause the following for example: if we have set up the oozie user to > be entitled to perform actions on behalf of example_user but oozie is > forbidden to decrypt any EDEK (for security reasons), due to the above issue, > example_user entitlements are lost from UGI and the following error is > reported: > {code} > [0] > SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] > JOB[0020905-180313191552532-oozie-oozi-W] > ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting > action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message > [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with > ACL name [encrypted_key]!!] > org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not > authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!! 
> at > org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463) > at > org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441) > at > org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523) > at > org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199) > at > org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563) > at > org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232) > at > org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63) > at org.apache.oozie.command.XCommand.call(XCommand.java:286) > at > org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332) > at > org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User > [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name > [encrypted_key]!! 
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:607) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:565) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:832) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:209) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:205) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:205) > at > org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388) > at > org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataE
[jira] [Updated] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
[ https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zsolt Venczel updated HDFS-13697: - Attachment: HDFS-13697.02.patch > EDEK decrypt fails due to proxy user being lost because of empty > AccessControllerContext > > > Key: HDFS-13697 > URL: https://issues.apache.org/jira/browse/HDFS-13697 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel >Priority: Major > Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch > > > While calling KeyProviderCryptoExtension decryptEncryptedKey the call stack > might not have doAs privileged execution call (in the DFSClient for example). > This results in loosing the proxy user from UGI as UGI.getCurrentUser finds > no AccessControllerContext and does a re-login for the login user only. > This can cause the following for example: if we have set up the oozie user to > be entitled to perform actions on behalf of example_user but oozie is > forbidden to decrypt any EDEK (for security reasons), due to the above issue, > example_user entitlements are lost from UGI and the following error is > reported: > {code} > [0] > SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] > JOB[0020905-180313191552532-oozie-oozi-W] > ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting > action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message > [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with > ACL name [encrypted_key]!!] > org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not > authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!! 
> at > org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463) > at > org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441) > at > org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523) > at > org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199) > at > org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563) > at > org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232) > at > org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63) > at org.apache.oozie.command.XCommand.call(XCommand.java:286) > at > org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332) > at > org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User > [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name > [encrypted_key]!! 
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:607) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:565) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:832) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:209) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:205) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:205) > at > org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388) > at > org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1440) > at > org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1542) > at
[jira] [Commented] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
[ https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522118#comment-16522118 ] Gabor Bota commented on HDFS-13697: --- Thanks for creating the issue and submitting the patch [~zvenczel]! There are two things I've noticed during the review: * Using string constant field instead of local variable {{kmsAcls}} in TestSecureEncryptionZoneWithKMS#init() {code:java} 234 // set up KMS not to allow oozie service to decrypt encryption keys 235 String kmsAcls = "kms-acls-oozie-blacklist-decrypt.xml"; 236 InputStream is = ThreadUtil.getResourceAsStream(kmsAcls); {code} I think kmsAcls is important enough to be extracted as a constant even if it will be private and used only locally in this test. * The test will pass even if other files are not patched in {{hadoop-hdfs-project/hadoop-hdfs-client}} > EDEK decrypt fails due to proxy user being lost because of empty > AccessControllerContext > > > Key: HDFS-13697 > URL: https://issues.apache.org/jira/browse/HDFS-13697 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel >Priority: Major > Attachments: HDFS-13697.01.patch > > > While calling KeyProviderCryptoExtension decryptEncryptedKey the call stack > might not have doAs privileged execution call (in the DFSClient for example). > This results in loosing the proxy user from UGI as UGI.getCurrentUser finds > no AccessControllerContext and does a re-login for the login user only. 
> This can cause the following for example: if we have set up the oozie user to > be entitled to perform actions on behalf of example_user but oozie is > forbidden to decrypt any EDEK (for security reasons), due to the above issue, > example_user entitlements are lost from UGI and the following error is > reported: > {code} > [0] > SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] > JOB[0020905-180313191552532-oozie-oozi-W] > ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting > action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message > [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with > ACL name [encrypted_key]!!] > org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not > authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!! > at > org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463) > at > org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441) > at > org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523) > at > org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199) > at > org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563) > at > org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232) > at > org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63) > at org.apache.oozie.command.XCommand.call(XCommand.java:286) > at > org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332) > at > org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179) > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User > [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name > [encrypted_key]!! > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:607) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:565) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:832) > at > org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvi
[jira] [Commented] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches
[ https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522115#comment-16522115 ] Istvan Fajth commented on HDFS-13322: - Hello [~fabbri], sorry for the delay here. I was not able to finish the measurement so far; I had some other things to do, and some problems with the environment setup over the weekend. I am mostly offline this week, so if you have the time and the inclination to run the test, that would be great; otherwise I would be glad to have your patience on this one until about the middle of next week. > fuse dfs - uid persists when switching between ticket caches > > > Key: HDFS-13322 > URL: https://issues.apache.org/jira/browse/HDFS-13322 > Project: Hadoop HDFS > Issue Type: Bug > Components: fuse-dfs >Affects Versions: 2.6.0 > Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed > Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux > >Reporter: Alex Volskiy >Assignee: Istvan Fajth >Priority: Minor > Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, > HDFS-13322.003.patch, testHDFS-13322.sh, test_after_patch.out, > test_before_patch.out > > > The symptoms of this issue are the same as described in HDFS-3608 except the > workaround that was applied (detect changes in UID ticket cache) doesn't > resolve the issue when multiple ticket caches are in use by the same user. > Our use case requires that a job scheduler running as a specific uid obtain > separate kerberos sessions per job and that each of these sessions use a > separate cache. When switching sessions this way, no change is made to the > original ticket cache so the cached filesystem instance doesn't get > regenerated. 
> > {{$ export KRB5CCNAME=/tmp/krb5cc_session1}} > {{$ kinit user_a@domain}} > {{$ touch /fuse_mount/tmp/testfile1}} > {{$ ls -l /fuse_mount/tmp/testfile1}} > {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}} > {{$ export KRB5CCNAME=/tmp/krb5cc_session2}} > {{$ kinit user_b@domain}} > {{$ touch /fuse_mount/tmp/testfile2}} > {{$ ls -l /fuse_mount/tmp/testfile2}} > {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}} > {{ }}{color:#d04437}*{{** expected owner to be user_b **}}*{color} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it
[ https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522070#comment-16522070 ] genericqa commented on HDDS-175:

| (x) *{color:red}-1 overall{color}* | \\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 17 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 29s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 31m 15s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 29m 10s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 29m 10s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 29m 10s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 28s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 32s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s{color} | {color:green} tools in
[jira] [Comment Edited] (HDFS-13688) Introduce msync API call
[ https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522041#comment-16522041 ] Yiqun Lin edited comment on HDFS-13688 at 6/25/18 8:59 AM:
---
Hi [~vagarychen], comparing the implementation details of the msync call against the design doc:
{noformat}
msync() implementation on the client should keep track of LastSeenId for both ANN and SBN:
* If c.LastSeenId.ANN <= c.LastSeenId.SBN then goto ANN and update c.LastSeenId.ANN
* Wait until SBN reaches c.LastSeenId.ANN
{noformat}
Some differences:
* LastSeenId isn't tracked for both ANN and SBN.
* There is a corner case where the client's request goes to the ANN while the syncTnxId passed to the msync call is larger than {{LastAppliedOrWrittenTxId}} on the ANN. Should an exception be thrown in that case?

Besides, regarding the following logic:
{code:java}
+    if (!HAServiceState.OBSERVER.toString().equals(namesystem.getHAState())) {
+      LOG.warn("Calling msync on a non-observer node:" +
+          namesystem.getHAState());
+      return namesystem.getFSImage().getLastAppliedOrWrittenTxId();
+    }
{code}
Shouldn't the condition check be {{HAServiceState.ACTIVE.toString().equals(namesystem.getHAState())}}? That is, only when the request goes to the ANN should the current txid be returned immediately; for the SBN/Observer node, we wait until it catches up.

Handling of the msync call in RBF is currently unsupported, so why not just pass the msyncExecutor as null there? It isn't actually used.
{code:java}
@@ -252,9 +257,11 @@ public RouterRpcServer(Configuration configuration, Router router,
     RPC.setProtocolEngine(this.conf, ClientNamenodeProtocolPB.class,
         ProtobufRpcEngine.class);
+    this.msyncExecutor = Executors.newFixedThreadPool(10);
     ClientNamenodeProtocolServerSideTranslatorPB clientProtocolServerTranslator =
-        new ClientNamenodeProtocolServerSideTranslatorPB(this);
+        new ClientNamenodeProtocolServerSideTranslatorPB(
+            this, msyncExecutor);
{code}

was (Author: linyiqun):
Hi [~vagarychen], just comparing implementation detail of msync call with that in design doc:
{noformat}
msync() implementation on the client should keep track of LastSeenId for both ANN and SBN:
* If c.LastSeenId.ANN <= c.LastSeenId.SBN then goto ANN and update c.LastSeenId.ANN
* Wait until SBN reaches c.LastSeenId.ANN
{noformat}
Some differences:
* LastSeenId isn't tracked for both ANN and SBN.
* For the corner case, the client request to ANN, meanwhile the syncTnxId passed in msync call large than {{LastAppliedOrWrittenTxId}} in ANN. Current processing logic is different with designed way.

Besides, for the following logic:
{code:java}
+    if (!HAServiceState.OBSERVER.toString().equals(namesystem.getHAState())) {
+      LOG.warn("Calling msync on a non-observer node:" +
+          namesystem.getHAState());
+      return namesystem.getFSImage().getLastAppliedOrWrittenTxId();
+    }
{code}
The condition check should be {{HAServiceState.ACTIVE.toString().equals(namesystem.getHAState()}}? This is mean that only when we request for ANN, then return current txid. For the SBN/Observer Node, we wait until catching up. For the msync call dealing in RBF, currently we don't supported. Why not just pass the msyncExecutor as null there? Actually it isn't real used.
{code:java}
@@ -252,9 +257,11 @@ public RouterRpcServer(Configuration configuration, Router router,
     RPC.setProtocolEngine(this.conf, ClientNamenodeProtocolPB.class,
         ProtobufRpcEngine.class);
+    this.msyncExecutor = Executors.newFixedThreadPool(10);
     ClientNamenodeProtocolServerSideTranslatorPB clientProtocolServerTranslator =
-        new ClientNamenodeProtocolServerSideTranslatorPB(this);
+        new ClientNamenodeProtocolServerSideTranslatorPB(
+            this, msyncExecutor);
{code}

> Introduce msync API call
>
> Key: HDFS-13688
> URL: https://issues.apache.org/jira/browse/HDFS-13688
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Chen Liang
> Assignee: Chen Liang
> Priority: Major
> Attachments: HDFS-13688-HDFS-12943.WIP.patch
>
> As mentioned in the design doc in HDFS-12943, to ensure consistent read, we need to introduce an RPC call {{msync}}. Specifically, client can issue a msync call to Observer node along with a transactionID. The msync will only return when the Observer's transactionID has caught up to the given ID. This JIRA is to add this API.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
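The wait semantics discussed in the comment above (an Observer blocks in msync until its applied transaction id reaches the id the client passes in, while an Active node can answer immediately with its latest id) can be sketched as a small, self-contained model. This is purely illustrative code, not the NameNode implementation: the {{Node}}, {{HAState}}, {{msync}}, and {{applyEdits}} names are invented for the sketch, and the real patch works against FSNamesystem state rather than a plain field.

```java
import java.util.concurrent.TimeUnit;

public class MsyncSketch {
    enum HAState { ACTIVE, OBSERVER }

    static class Node {
        final HAState state;
        volatile long lastAppliedTxId;

        Node(HAState state, long txId) {
            this.state = state;
            this.lastAppliedTxId = txId;
        }

        /** Returns the node's txid; an observer first waits until it is >= syncTxId. */
        long msync(long syncTxId) throws InterruptedException {
            if (state == HAState.ACTIVE) {
                // The active node already has the newest edits; nothing to wait for,
                // even if syncTxId is larger than what it has written so far.
                return lastAppliedTxId;
            }
            synchronized (this) {
                while (lastAppliedTxId < syncTxId) {
                    wait(100);  // woken by applyEdits(); bounded wait as a fallback
                }
            }
            return lastAppliedTxId;
        }

        /** Simulates the edit-tailing thread applying edits up to newTxId. */
        void applyEdits(long newTxId) {
            synchronized (this) {
                lastAppliedTxId = newTxId;
                notifyAll();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Node observer = new Node(HAState.OBSERVER, 5);
        // Simulate edit tailing catching up in the background.
        Thread tailer = new Thread(() -> {
            try { TimeUnit.MILLISECONDS.sleep(50); } catch (InterruptedException ignored) {}
            observer.applyEdits(10);
        });
        tailer.start();
        long seen = observer.msync(10);  // blocks until txid 10 is applied
        tailer.join();
        System.out.println("caught up to " + seen);
    }
}
```

Under this model, Yiqun's corner case is the ACTIVE branch: a syncTxId larger than the active node's own txid just returns immediately, which is why the comment asks whether an exception should be thrown instead.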
[jira] [Updated] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
[ https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zsolt Venczel updated HDFS-13697:
---
Attachment: HDFS-13697.01.patch
Status: Patch Available (was: In Progress)

> EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Zsolt Venczel
> Assignee: Zsolt Venczel
> Priority: Major
> Attachments: HDFS-13697.01.patch
>
> While calling KeyProviderCryptoExtension decryptEncryptedKey, the call stack might not have a doAs privileged execution call (in the DFSClient, for example). This results in losing the proxy user from UGI, as UGI.getCurrentUser finds no AccessControllerContext and does a re-login for the login user only.
> This can cause the following, for example: if we have set up the oozie user to be entitled to perform actions on behalf of example_user, but oozie is forbidden to decrypt any EDEK (for security reasons), then due to the above issue the example_user entitlements are lost from UGI and the following error is reported:
> {code}
> [0]
> SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] JOB[0020905-180313191552532-oozie-oozi-W] ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!]
> org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
> at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
> at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
> at org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
> at org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
> at org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
> at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
> at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
> at org.apache.oozie.command.XCommand.call(XCommand.java:286)
> at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
> at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:607)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:565)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:832)
> at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:209)
> at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:205)
> at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
> at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:205)
> at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1440)
> at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStr
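The failure mode described in this issue (the current user is resolved from the calling context, and without a surrounding doAs scope the lookup falls back to the login user, dropping the proxy user) can be modelled without Hadoop at all. The sketch below is hypothetical: a ThreadLocal stands in for Hadoop's UserGroupInformation context, and {{currentUser}}, {{doAs}}, and {{decryptEdek}} are invented names mirroring the oozie/example_user scenario from the report.

```java
import java.util.concurrent.Callable;

public class ProxyUserSketch {
    static final String LOGIN_USER = "oozie";
    static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    /** Like UGI.getCurrentUser(): the in-scope user if set, else the login user. */
    static String currentUser() {
        String u = CURRENT.get();
        return (u != null) ? u : LOGIN_USER;
    }

    /** Like ugi.doAs(...): run the action with the proxy user in scope. */
    static <T> T doAs(String proxyUser, Callable<T> action) throws Exception {
        String prev = CURRENT.get();
        CURRENT.set(proxyUser);
        try {
            return action.call();
        } finally {
            if (prev == null) CURRENT.remove(); else CURRENT.set(prev);
        }
    }

    /** The KMS authorizes against whoever currentUser() resolves to. */
    static String decryptEdek() {
        return "DECRYPT_EEK as " + currentUser();
    }

    public static void main(String[] args) throws Exception {
        // Wrapped: the proxy user survives down the call stack.
        System.out.println(doAs("example_user", ProxyUserSketch::decryptEdek));
        // Unwrapped (the bug): the lookup falls back to the login user "oozie",
        // which is exactly the principal the KMS ACL rejects in the log above.
        System.out.println(decryptEdek());
    }
}
```

The fix direction implied by the description is the first pattern: ensure the decrypt call in DFSClient happens inside the proxy user's doAs scope so the context lookup cannot fall back to the login user.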
[jira] [Work started] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
[ https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-13697 started by Zsolt Venczel.

> EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Zsolt Venczel
> Assignee: Zsolt Venczel
> Priority: Major
[jira] [Created] (HDFS-13697) EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
Zsolt Venczel created HDFS-13697:
---
Summary: EDEK decrypt fails due to proxy user being lost because of empty AccessControllerContext
Key: HDFS-13697
URL: https://issues.apache.org/jira/browse/HDFS-13697
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel