[jira] [Commented] (HDFS-14095) EC: Track Erasure Coding commands in DFS statistics
[ https://issues.apache.org/jira/browse/HDFS-14095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16703641#comment-16703641 ] Brahma Reddy Battula commented on HDFS-14095: - [~ayushtkn] thanks for updating the patch. The latest patch lgtm; will commit shortly. The test failure is unrelated to this; I executed it locally and it passes. Please check the following for the same. {noformat} [INFO] [INFO] --- [INFO] T E S T S [INFO] --- [INFO] Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.385 s - in org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts [INFO] [INFO] Results: [INFO] [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-antrun-plugin:1.7:run (hdfs-test-bats-driver) @ hadoop-hdfs --- [INFO] Executing tasks{noformat} > EC: Track Erasure Coding commands in DFS statistics > --- > > Key: HDFS-14095 > URL: https://issues.apache.org/jira/browse/HDFS-14095 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14095-01.patch, HDFS-14095-02.patch, > HDFS-14095-03.patch, HDFS-14095-04.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14095) EC: Track Erasure Coding commands in DFS statistics
[ https://issues.apache.org/jira/browse/HDFS-14095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16702639#comment-16702639 ] Brahma Reddy Battula commented on HDFS-14095: - [~ayushtkn] thanks for updating the patch. Can you address the checkstyle issue? > EC: Track Erasure Coding commands in DFS statistics > --- > > Key: HDFS-14095 > URL: https://issues.apache.org/jira/browse/HDFS-14095 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14095-01.patch, HDFS-14095-02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14095) EC: Track Erasure Coding commands in DFS statistics
[ https://issues.apache.org/jira/browse/HDFS-14095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16701885#comment-16701885 ] Brahma Reddy Battula commented on HDFS-14095: - Thanks [~ayushtkn] for reporting and working on this. Patch lgtm apart from the following. It would be good if you handle all the missing StorageStatistics metrics, since this storage statistics tracks how many times each DFS operation was issued. Maybe in a separate jira..? * Fix the following typo: ** 55 DISABLE_EC_POLICY("op_disible_ec_policy"), * Take care of the alphabetical order: ** 101 UNSET_STORAGE_POLICY("op_unset_storage_policy"), 102 UNSET_EC_POLICY("op_unset_ec_policy"); > EC: Track Erasure Coding commands in DFS statistics > --- > > Key: HDFS-14095 > URL: https://issues.apache.org/jira/browse/HDFS-14095 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14095-01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
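For illustration, the two fixes asked for above can be sketched as a self-contained enum. This is a hedged sketch, not the actual Hadoop source: the class name `OpTypeSketch` is hypothetical, and the real entries live in `DFSOpsCountStatistics.OpType`. It shows the typo corrected to "op_disable_ec_policy" and the UNSET_* entries kept in alphabetical order.

```java
// Hedged sketch of the corrected entries (hypothetical class; the real
// enum is DFSOpsCountStatistics.OpType in hadoop-hdfs-client).
public class OpTypeSketch {
    public enum OpType {
        DISABLE_EC_POLICY("op_disable_ec_policy"),     // typo "op_disible" fixed
        UNSET_EC_POLICY("op_unset_ec_policy"),         // alphabetical: EC before STORAGE
        UNSET_STORAGE_POLICY("op_unset_storage_policy");

        private final String symbol;

        OpType(String symbol) {
            this.symbol = symbol;
        }

        public String getSymbol() {
            return symbol;
        }
    }

    public static void main(String[] args) {
        for (OpType op : OpType.values()) {
            System.out.println(op + " -> " + op.getSymbol());
        }
    }
}
```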
[jira] [Commented] (HDFS-14079) RBF: RouterAdmin should have failover concept for router
[ https://issues.apache.org/jira/browse/HDFS-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16701856#comment-16701856 ] Brahma Reddy Battula commented on HDFS-14079: - [~surendrasingh] thanks for reporting and working on this. * As [~linyiqun] pointed out, consistency will be a problem; we might need to retry till the state store is refreshed (which is by default 60 sec). It looks like you used TRY_ONCE_THEN_FAIL; please add testcases for that. Ideally admin operations will be rare (add, ls, safemode, ...); maybe because of this it was not implemented? * @ProtocolInfo(protocolName = HdfsConstants.CLIENT_NAMENODE_PROTOCOL_NAME, Not sure whether this is intended. [~elgoiri] can you confirm the same? Even this can be done in a separate jira. * IMO, instead of exposing a config for the admin address (which needs additional validation), RouterStateManager has the admin address like below, so we can use that? {code:java} List<RouterState> cachedRecords = router.getRouterStateManager().getCachedRecords(); for (RouterState routerState : cachedRecords) { String adminAddress = routerState.getAdminAddress(); } {code} > RBF: RouterAdmin should have failover concept for router > > > Key: HDFS-14079 > URL: https://issues.apache.org/jira/browse/HDFS-14079 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.1.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Major > Attachments: HDFS-14079-HDFS-13891.01.patch, > HDFS-14079-HDFS-13891.02.patch > > > Currently {{RouterAdmin}} connects with only one router for admin operations; > if the configured router is down then the router admin command fails. It > should allow configuring all the router admin addresses. 
> {code} > // Initialize RouterClient > try { > String address = getConf().getTrimmed( > RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY, > RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_DEFAULT); > InetSocketAddress routerSocket = NetUtils.createSocketAddr(address); > client = new RouterClient(routerSocket, getConf()); > } catch (RPC.VersionMismatch v) { > System.err.println( > "Version mismatch between client and server... command aborted"); > return exitCode; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
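The failover idea discussed above (trying each configured router admin address instead of only one) can be sketched roughly as follows. This is a hedged, self-contained stand-in, not the actual RouterAdmin code: `isReachable` abstracts away constructing a `RouterClient`, and the class and method names are hypothetical.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: walk the list of configured router admin addresses
// and fail over to the next one when a connection attempt fails.
public class RouterAdminFailover {
    static String connectToFirstAvailable(List<String> addresses,
                                          Predicate<String> isReachable) {
        for (String address : addresses) {
            if (isReachable.test(address)) {
                return address;  // connected; use this router for admin RPCs
            }
            // would log the failure here and try the next configured router
        }
        throw new IllegalStateException("No router admin address reachable");
    }

    public static void main(String[] args) {
        List<String> addrs = Arrays.asList("router1:8111", "router2:8111");
        // simulate router1 being down
        String used = connectToFirstAvailable(addrs, a -> a.startsWith("router2"));
        System.out.println(used);
    }
}
```

Note the review comment still applies: with TRY_ONCE_THEN_FAIL, retries until the state store refreshes would have to happen around a loop like this.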
[jira] [Commented] (HDFS-13816) dfs.getQuotaUsage() throws NPE on non-existent dir instead of FileNotFoundException
[ https://issues.apache.org/jira/browse/HDFS-13816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16698548#comment-16698548 ] Brahma Reddy Battula commented on HDFS-13816: - +1,lgtm. > dfs.getQuotaUsage() throws NPE on non-existent dir instead of > FileNotFoundException > --- > > Key: HDFS-13816 > URL: https://issues.apache.org/jira/browse/HDFS-13816 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > Attachments: HDFS-13816-01.patch, HDFS-13816-02.patch > > > {{dfs.getQuotaUsage()}} on non-existent path should throw > FileNotFoundException. > {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getQuotaUsageInt(FSDirStatAndListingOp.java:573) > at > org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getQuotaUsage(FSDirStatAndListingOp.java:554) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getQuotaUsage(FSNamesystem.java:3221) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getQuotaUsage(NameNodeRpcServer.java:1404) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getQuotaUsage(ClientNamenodeProtocolServerSideTranslatorPB.java:1861) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) > at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
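The fix pattern for the NPE above (resolve the path first and raise FileNotFoundException when the target does not exist, instead of dereferencing a null INode) can be sketched in a self-contained form. A map stands in for the namesystem and all names are hypothetical; the real fix lives in FSDirStatAndListingOp.

```java
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the null-check-then-throw pattern.
public class QuotaUsageSketch {
    public static final Map<String, Long> NAMESPACE = new HashMap<>();

    public static long getQuotaUsage(String path) throws FileNotFoundException {
        Long usage = NAMESPACE.get(path);
        if (usage == null) {
            // Previously the null was dereferenced, producing the NPE;
            // the fix is to report the missing path explicitly.
            throw new FileNotFoundException("Path does not exist: " + path);
        }
        return usage;
    }

    public static void main(String[] args) throws Exception {
        NAMESPACE.put("/data", 42L);
        System.out.println(getQuotaUsage("/data"));
        try {
            getQuotaUsage("/missing");
        } catch (FileNotFoundException e) {
            System.out.println("FileNotFoundException as expected");
        }
    }
}
```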
[jira] [Updated] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService
[ https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-14089: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-13891 Status: Resolved (was: Patch Available) Committed to branch. [~RANith] thanks for the contribution, and [~elgoiri] thanks for the additional review. > RBF: Failed to specify server's Kerberos pricipal name in > NamenodeHeartbeatService > --- > > Key: HDFS-14089 > URL: https://issues.apache.org/jira/browse/HDFS-14089 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Minor > Fix For: HDFS-13891 > > Attachments: HDFS-14089-HDFS-13891.003.patch, > HDFS-14089-HDFS-13891.004.patch, HDFS-14089.002.patch, HDFS-14089.patch > > > Currently "HADOOP_SECURITY_SERVICE_USER_NAME_KEY" need to configure manually. > Let's do how DFSZKFailoverController, DFSHAAdmin setting this conf to > NamenodeHeartbeatService So that we no need to configure manually. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService
[ https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-14089: Summary: RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService (was: RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService) > RBF: Failed to specify server's Kerberos pricipal name in > NamenodeHeartbeatService > --- > > Key: HDFS-14089 > URL: https://issues.apache.org/jira/browse/HDFS-14089 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Minor > Attachments: HDFS-14089-HDFS-13891.003.patch, > HDFS-14089-HDFS-13891.004.patch, HDFS-14089.002.patch, HDFS-14089.patch > > > Currently "HADOOP_SECURITY_SERVICE_USER_NAME_KEY" need to configure manually. > Let's do how DFSZKFailoverController, DFSHAAdmin setting this conf to > NamenodeHeartbeatService So that we no need to configure manually. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService
[ https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-14089: Description: Currently "HADOOP_SECURITY_SERVICE_USER_NAME_KEY" need to configure manually. Let's do how DFSZKFailoverController, DFSHAAdmin setting this conf to NamenodeHeartbeatService So that we no need to configure manually. (was: DFSZKFailoverController, DFSHAAdmin setting the conf for "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". Need to add the configuration for NamenodeHeartbeatService as well.) > RBF: Failed to specify server's Kerberos pricipal name in > NamenodeHeartbeatService > -- > > Key: HDFS-14089 > URL: https://issues.apache.org/jira/browse/HDFS-14089 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Minor > Attachments: HDFS-14089-HDFS-13891.003.patch, > HDFS-14089-HDFS-13891.004.patch, HDFS-14089.002.patch, HDFS-14089.patch > > > Currently "HADOOP_SECURITY_SERVICE_USER_NAME_KEY" need to configure manually. > Let's do how DFSZKFailoverController, DFSHAAdmin setting this conf to > NamenodeHeartbeatService So that we no need to configure manually. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13776) RBF: Add Storage policies related ClientProtocol APIs
[ https://issues.apache.org/jira/browse/HDFS-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695129#comment-16695129 ] Brahma Reddy Battula commented on HDFS-13776: - Going to commit to the HDFS-13891 branch. > RBF: Add Storage policies related ClientProtocol APIs > - > > Key: HDFS-13776 > URL: https://issues.apache.org/jira/browse/HDFS-13776 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Major > Attachments: HDFS-13776-000.patch, HDFS-13776-001.patch, > HDFS-13776-002.patch, HDFS-13776-003.patch, HDFS-13776-004.patch, > HDFS-13776-005.patch, HDFS-13776-006.patch > > > Currently unsetStoragePolicy and getStoragePolicy are not implemented in > RouterRpcServer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13776) RBF: Add Storage policies related ClientProtocol APIs
[ https://issues.apache.org/jira/browse/HDFS-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695143#comment-16695143 ] Brahma Reddy Battula commented on HDFS-13776: - * [^HDFS-13776-HDFS-13891-006.patch] is the committed patch. > RBF: Add Storage policies related ClientProtocol APIs > - > > Key: HDFS-13776 > URL: https://issues.apache.org/jira/browse/HDFS-13776 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-13776-000.patch, HDFS-13776-001.patch, > HDFS-13776-002.patch, HDFS-13776-003.patch, HDFS-13776-004.patch, > HDFS-13776-005.patch, HDFS-13776-006.patch, HDFS-13776-HDFS-13891-006.patch > > > Currently unsetStoragePolicy and getStoragePolicy are not implemented in > RouterRpcServer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13776) RBF: Add Storage policies related ClientProtocol APIs
[ https://issues.apache.org/jira/browse/HDFS-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13776: Attachment: HDFS-13776-HDFS-13891-006.patch > RBF: Add Storage policies related ClientProtocol APIs > - > > Key: HDFS-13776 > URL: https://issues.apache.org/jira/browse/HDFS-13776 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-13776-000.patch, HDFS-13776-001.patch, > HDFS-13776-002.patch, HDFS-13776-003.patch, HDFS-13776-004.patch, > HDFS-13776-005.patch, HDFS-13776-006.patch, HDFS-13776-HDFS-13891-006.patch > > > Currently unsetStoragePolicy and getStoragePolicy are not implemented in > RouterRpcServer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13776) RBF: Add Storage policies related ClientProtocol APIs
[ https://issues.apache.org/jira/browse/HDFS-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13776: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-13891 Status: Resolved (was: Patch Available) Committed to the HDFS-13891 branch. [~dibyendu_hadoop] thanks for the contribution. [~elgoiri] thanks for the additional great review. FYI, there was a compilation error because of DN_REPORT_CACHE_EXPIRE, which was moved to RBFConfigKeys (HDFS-13852). I resolved it while committing and will upload the same. > RBF: Add Storage policies related ClientProtocol APIs > - > > Key: HDFS-13776 > URL: https://issues.apache.org/jira/browse/HDFS-13776 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-13776-000.patch, HDFS-13776-001.patch, > HDFS-13776-002.patch, HDFS-13776-003.patch, HDFS-13776-004.patch, > HDFS-13776-005.patch, HDFS-13776-006.patch, HDFS-13776-HDFS-13891-006.patch > > > Currently unsetStoragePolicy and getStoragePolicy are not implemented in > RouterRpcServer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy
[ https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-14064: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.1 3.3.0 Status: Resolved (was: Patch Available) Committed to trunk and branch-3.2. [~ayushtkn] thanks for your contribution, and thanks to the others for the additional review. > WEBHDFS: Support Enable/Disable EC Policy > - > > Key: HDFS-14064 > URL: https://issues.apache.org/jira/browse/HDFS-14064 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Fix For: 3.3.0, 3.2.1 > > Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, > HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch, > HDFS-14064-05.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService
[ https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695110#comment-16695110 ] Brahma Reddy Battula commented on HDFS-14089: - No, we covered this; we manually configured it in core-site.xml. > RBF: Failed to specify server's Kerberos pricipal name in > NamenodeHeartbeatService > -- > > Key: HDFS-14089 > URL: https://issues.apache.org/jira/browse/HDFS-14089 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Minor > Attachments: HDFS-14089-HDFS-13891.003.patch, HDFS-14089.002.patch, > HDFS-14089.patch > > > DFSZKFailoverController, DFSHAAdmin setting the conf for > "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". Need to add the configuration for > NamenodeHeartbeatService as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy
[ https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695081#comment-16695081 ] Brahma Reddy Battula commented on HDFS-14064: - +1 for the latest patch. [~ayushtkn] thanks for updating the patch; will commit shortly. > WEBHDFS: Support Enable/Disable EC Policy > - > > Key: HDFS-14064 > URL: https://issues.apache.org/jira/browse/HDFS-14064 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, > HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch, > HDFS-14064-05.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService
[ https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695078#comment-16695078 ] Brahma Reddy Battula commented on HDFS-14089: - bq.We should have a unit test to cover this though. This Jira will avoid the manual configuration of "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We set this in SecurityConfUtil.java, so removing this conf from SecurityConfUtil.java should be enough to verify it. [~RANith] can you please remove the unused imports from SecurityConfUtil.java? Apart from this, the patch lgtm. > RBF: Failed to specify server's Kerberos pricipal name in > NamenodeHeartbeatService > -- > > Key: HDFS-14089 > URL: https://issues.apache.org/jira/browse/HDFS-14089 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Minor > Attachments: HDFS-14089-HDFS-13891.003.patch, HDFS-14089.002.patch, > HDFS-14089.patch > > > DFSZKFailoverController, DFSHAAdmin setting the conf for > "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". Need to add the configuration for > NamenodeHeartbeatService as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
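The pattern under discussion — the service setting "hadoop.security.service.user.name.key" itself, the way DFSZKFailoverController and DFSHAAdmin do, so users need not configure it manually — can be sketched as follows. This is an assumption-laden sketch, not the committed patch: a plain Map stands in for Hadoop's Configuration, and the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: before creating the NameNode RPC proxy, point the
// service-user-name key at the NameNode principal key in the conf copy.
public class HeartbeatSecurityConf {
    // Assumed key strings, mirroring HADOOP_SECURITY_SERVICE_USER_NAME_KEY
    // and the NameNode Kerberos principal key discussed in the thread.
    public static final String SERVICE_USER_NAME_KEY =
        "hadoop.security.service.user.name.key";
    public static final String NN_PRINCIPAL_KEY =
        "dfs.namenode.kerberos.principal";

    public static Map<String, String> prepareConf(Map<String, String> conf) {
        Map<String, String> copy = new HashMap<>(conf);
        copy.put(SERVICE_USER_NAME_KEY, NN_PRINCIPAL_KEY);  // set automatically
        return copy;
    }

    public static void main(String[] args) {
        Map<String, String> conf = prepareConf(new HashMap<>());
        System.out.println(conf.get(SERVICE_USER_NAME_KEY));
    }
}
```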
[jira] [Comment Edited] (HDFS-14091) RBF: File Read and Writing is failing when security is enabled.
[ https://issues.apache.org/jira/browse/HDFS-14091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694127#comment-16694127 ] Brahma Reddy Battula edited comment on HDFS-14091 at 11/21/18 3:27 AM: --- [~RANith] thanks for reporting. I planned to do this under HDFS-13655; let's do it there. This will not be a blocker, since the "dfs.encrypt.data.transfer" default value is "false" (these will be enabled for data encryption). Coming to the patch: the encryption key is based on the BPID, so I feel we need to get it from all the namespaces and return it for the requested namespace. was (Author: brahmareddy): [~RANith] thanks for reporting. I planned to do this under HDFS-13655. and this will not blocker since "dfs.encrypt.data.transfer" default value will be "false".( these will be enabled for data encryption) Coming to Patch,Encryption key based on the BPID so I feel, we need to get from all the namespaces and return to requessted namespace. > RBF: File Read and Writing is failing when security is enabled. 
> --- > > Key: HDFS-14091 > URL: https://issues.apache.org/jira/browse/HDFS-14091 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-13532 >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Blocker > Attachments: HDFS-14091.001.patch > > > 2018-11-20 14:20:53,127 INFO hdfs.DataStreamer: Exception in > createBlockOutputStream blk_1073741872_1048 > org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): > Operation "getDataEncryptionKey" is not supported > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:436) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getDataEncryptionKey(RouterRpcServer.java:1965) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDataEncryptionKey(ClientNamenodeProtocolServerSideTranslatorPB.java:1214) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684) > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520) > at org.apache.hadoop.ipc.Client.call(Client.java:1466) > at org.apache.hadoop.ipc.Client.call(Client.java:1376) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at 
com.sun.proxy.$Proxy11.getDataEncryptionKey(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDataEncryptionKey(ClientNamenodeProtocolTranslatorPB.java:1133) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) > at com.sun.proxy.$Proxy12.getDataEncryptionKey(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:1824) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:214) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183) > at > org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1795) > at > org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1743) > at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718) -- This message was sent
[jira] [Commented] (HDFS-14091) RBF: File Read and Writing is failing when security is enabled.
[ https://issues.apache.org/jira/browse/HDFS-14091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694127#comment-16694127 ] Brahma Reddy Battula commented on HDFS-14091: - [~RANith] thanks for reporting. I planned to do this under HDFS-13655. This will not be a blocker, since the "dfs.encrypt.data.transfer" default value is "false" (these will be enabled for data encryption). Coming to the patch: the encryption key is based on the BPID, so I feel we need to get it from all the namespaces and return it for the requested namespace. > RBF: File Read and Writing is failing when security is enabled. > --- > > Key: HDFS-14091 > URL: https://issues.apache.org/jira/browse/HDFS-14091 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-13532 >Reporter: Ranith Sardar >Assignee: Ranith Sardar >Priority: Blocker > Attachments: HDFS-14091.001.patch > > > 2018-11-20 14:20:53,127 INFO hdfs.DataStreamer: Exception in > createBlockOutputStream blk_1073741872_1048 > org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): > Operation "getDataEncryptionKey" is not supported > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:436) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getDataEncryptionKey(RouterRpcServer.java:1965) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDataEncryptionKey(ClientNamenodeProtocolServerSideTranslatorPB.java:1214) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824) > at 
java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684) > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520) > at org.apache.hadoop.ipc.Client.call(Client.java:1466) > at org.apache.hadoop.ipc.Client.call(Client.java:1376) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy11.getDataEncryptionKey(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDataEncryptionKey(ClientNamenodeProtocolTranslatorPB.java:1133) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) > at com.sun.proxy.$Proxy12.getDataEncryptionKey(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:1824) > at > org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:214) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183) > at > org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1795) > at > org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1743) > at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
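The review suggestion above — since the encryption key is tied to a block-pool ID, the Router should fetch keys from all namespaces and hand back the one for the requested namespace — can be sketched in a self-contained form. Strings stand in for DataEncryptionKey and for the per-namespace RPC calls; all names here are hypothetical, not the actual RouterRpcServer code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: collect a key per block pool, then select by BPID.
public class RouterEncryptionKeySketch {
    // Would be one getDataEncryptionKey RPC per downstream NameNode.
    static Map<String, String> fetchKeysFromAllNamespaces() {
        Map<String, String> keys = new HashMap<>();
        keys.put("BP-ns1", "key-ns1");
        keys.put("BP-ns2", "key-ns2");
        return keys;
    }

    public static String getDataEncryptionKey(String blockPoolId) {
        // Return the key belonging to the namespace the client asked about.
        return fetchKeysFromAllNamespaces().get(blockPoolId);
    }

    public static void main(String[] args) {
        System.out.println(getDataEncryptionKey("BP-ns2"));
    }
}
```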
[jira] [Commented] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy
[ https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16693604#comment-16693604 ] Brahma Reddy Battula commented on HDFS-14064: - [~ayushtkn] thanks for working on this. The latest patch lgtm apart from the following: * Can you update the WebHDFS doc for the same? * Can you use try-with-resources for MiniDFSCluster? * As you are verifying with DFS, it's better to add a comment on the same (you can even combine them into a single testcase). * Fix the checkstyle issues. > WEBHDFS: Support Enable/Disable EC Policy > - > > Key: HDFS-14064 > URL: https://issues.apache.org/jira/browse/HDFS-14064 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, > HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
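The try-with-resources review comment can be sketched with a stand-in for MiniDFSCluster (which is AutoCloseable in recent Hadoop versions), so the cluster shuts down automatically when the block exits. The `FakeCluster` class below is a hypothetical placeholder kept only so the sketch is self-contained.

```java
// Hedged sketch of the try-with-resources pattern asked for in the review.
public class TryWithResourcesSketch {
    public static class FakeCluster implements AutoCloseable {
        public boolean running = true;

        @Override
        public void close() {          // plays the role of cluster.shutdown()
            running = false;
        }
    }

    public static void main(String[] args) {
        FakeCluster observed;
        try (FakeCluster cluster = new FakeCluster()) {
            observed = cluster;
            // ... the WebHDFS enable/disable EC policy assertions would run here ...
        }
        // After the block, the cluster has been shut down automatically.
        System.out.println(observed.running);
    }
}
```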
[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)
[ https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693348#comment-16693348 ] Brahma Reddy Battula commented on HDFS-13972: - [~crh] thanks for working on this. The approach looks good to me apart from the following. {quote}Also as discussed in previous threads, we can do optimizations to re-use namenode code, but have kept it simple for now. {quote} Yes, we can optimize. JspHelper.java and UserProvider.java will also be loaded in the classpath, and they are similar (except for the following) to RouterJSPHelper (RJH) and RouterUserProvider (RUP). You might have succeeded because RJH and RUP were loaded first in the classpath in your test. * We can have one interface for verifyToken(...) which can be implemented by both NameNode and Router (like below), so that we don't need RJH and RUP. {code:java} +package org.apache.hadoop.hdfs.server.common; + +import java.io.IOException; + +import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier; + +public interface TokenVerifier<T extends AbstractDelegationTokenIdentifier> { + void verifyToken(T t, byte[] password) throws IOException; +} --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java @@ -176,10 +176,10 @@ private static UserGroupInformation getTokenUGI(ServletContext context, DelegationTokenIdentifier id = new DelegationTokenIdentifier(); id.readFields(in); if (context != null) { - final NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context); + final TokenVerifier nn = NameNodeHttpServer.getTokenVerifierFromContext(context); if (nn != null) { // Verify the token. 
- nn.getNamesystem().verifyToken(id, token.getPassword()); + nn.verifyToken(id, token.getPassword()); } --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java @@ -47,6 +47,7 @@ import org.apache.hadoop.hdfs.protocol.ClientProtocol; import org.apache.hadoop.hdfs.protocol.HdfsConstants; import org.apache.hadoop.hdfs.protocol.HdfsConstants.StoragePolicySatisfierMode; +import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier; import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap; import org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer; import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager; @@ -55,6 +56,7 @@ import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption; import org.apache.hadoop.hdfs.server.common.MetricsLoggerTask; import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory; +import org.apache.hadoop.hdfs.server.common.TokenVerifier; import org.apache.hadoop.hdfs.server.namenode.ha.ActiveState; import org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby; import org.apache.hadoop.hdfs.server.namenode.ha.HAContext; @@ -208,7 +210,7 @@ **/ @InterfaceAudience.Private public class NameNode extends ReconfigurableBase implements - NameNodeStatusMXBean { + NameNodeStatusMXBean, TokenVerifier { static{ HdfsConfiguration.init(); } @@ -2202,4 +2204,10 @@ String reconfigureSPSModeEvent(String newVal, String property) protected Configuration getNewConf() { return new HdfsConfiguration(); } + + @Override + public void verifyToken(DelegationTokenIdentifier tokenId, byte[] password) + throws IOException { + namesystem.verifyToken(tokenId, password); + } } diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java index 1bc43b896ae..e199a10bd5b 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java @@ -36,7 +36,9 @@ import org.apache.hadoop.hdfs.DFSConfigKeys; import org.apache.hadoop.hdfs.DFSUtil; import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys; +import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier; import org.apache.hadoop.hdfs.server.common.JspHelper; +import org.apache.hadoop.hdfs.server.common.TokenVerifier; import org.apache.hadoop.hdfs.server.namenode.startupprogress.StartupProgress; import org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods; import org.apache.hadoop.hdfs.web.AuthFilter; @@ -308,6 +310,10 @@ public static NameNode getNameNodeFromContext(ServletContext context) { return (NameNode)context.getAttribute(NAMENODE_ATTRIBUTE_KEY); } + public static TokenVerifier getTokenVerifierFromContext(ServletContext
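The refactor sketched in the diff above boils down to this: both NameNode and Router implement one TokenVerifier interface, so JspHelper can verify a token without knowing which server it is running in. A self-contained sketch of that shape (SimpleId and NameNodeLike are hypothetical stand-ins for AbstractDelegationTokenIdentifier and NameNode):

```java
import java.io.IOException;
import java.util.Arrays;

public class TokenVerifierSketch {
  // Stand-in for AbstractDelegationTokenIdentifier.
  static class SimpleId {
    final byte[] password;
    SimpleId(byte[] password) { this.password = password; }
  }

  // The shared interface from the diff: one verifyToken contract that both
  // NameNode and Router can implement, removing the need for router-only
  // copies of JspHelper/UserProvider.
  interface TokenVerifier<T extends SimpleId> {
    void verifyToken(T id, byte[] password) throws IOException;
  }

  // NameNode-like implementer: checks the presented password against the
  // token identifier, throwing on mismatch as the real namesystem does.
  static class NameNodeLike implements TokenVerifier<SimpleId> {
    @Override
    public void verifyToken(SimpleId id, byte[] password) throws IOException {
      if (!Arrays.equals(id.password, password)) {
        throw new IOException("token password mismatch");
      }
    }
  }

  // Helper mirroring the JspHelper#getTokenUGI call site: true if verification passes.
  public static boolean verifies(byte[] stored, byte[] presented) {
    try {
      new NameNodeLike().verifyToken(new SimpleId(stored), presented);
      return true;
    } catch (IOException e) {
      return false;
    }
  }
}
```

A Router-side implementer would simply provide its own verifyToken backed by the router's token store, and the servlet-context lookup returns the interface rather than a concrete NameNode.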
[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)
[ https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686508#comment-16686508 ] Brahma Reddy Battula commented on HDFS-13972: - bq.Could you help rebase HDFS-13891 branch with trunk. Done. Please pull the branch before you work; you might get conflicts, as HDFS-13834 was missed (sorry for this) and I pushed this commit. > RBF: Support for Delegation Token (WebHDFS) > --- > > Key: HDFS-13972 > URL: https://issues.apache.org/jira/browse/HDFS-13972 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > > HDFS Router should support issuing HDFS delegation tokens through WebHDFS. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.
[ https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684829#comment-16684829 ] Brahma Reddy Battula commented on HDFS-13852: - FYI: HDFS-13891 is rebased, so it now includes HADOOP-15916. > RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured > in RBFConfigKeys. > - > > Key: HDFS-13852 > URL: https://issues.apache.org/jira/browse/HDFS-13852 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation, hdfs >Affects Versions: 3.1.0, 2.9.1, 3.0.1 >Reporter: yanghuafeng >Assignee: yanghuafeng >Priority: Major > Attachments: HDFS-13852-HDFS-13891.0.patch, HDFS-13852.001.patch, > HDFS-13852.002.patch, HDFS-13852.003.patch, HDFS-13852.004.patch > > > In the NamenodeBeanMetrics the router will invoke 'getDataNodeReport' > periodically. And we can set the dfs.federation.router.dn-report.time-out and > dfs.federation.router.dn-report.cache-expire to avoid time out. But when we > start the router, the FederationMetrics will also invoke the method to get > node usage. If a time out error happens, we cannot adjust the parameter > time_out. And the time_out in the FederationMetrics and NamenodeBeanMetrics > should be the same. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility
[ https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-14070: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.1 3.3.0 3.1.2 3.0.4 Status: Resolved (was: Patch Available) Committed to trunk,branch-3.2,branch-3.1 and branch-3.0. [~crh] thanks for contribution. [~elgoiri] thanks for additional review. > Refactor NameNodeWebHdfsMethods to allow better extensibility > - > > Key: HDFS-14070 > URL: https://issues.apache.org/jira/browse/HDFS-14070 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: CR Hota >Assignee: CR Hota >Priority: Major > Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1 > > Attachments: HDFS-14070.001.patch > > > Router extends NamenodeWebHdfsMethods, methods such as renewDelegationToken, > cancelDelegationToken and generateDelegationTokens should be extensible. > Router can then have its own implementation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility
[ https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684764#comment-16684764 ] Brahma Reddy Battula commented on HDFS-14070: - bq.Router will extend the new methods and have its own implementation w.r.t webhdfs token management. Yes, this refactor is required. Thanks for reporting. +1 on HDFS-14070.001.patch. Will commit. > Refactor NameNodeWebHdfsMethods to allow better extensibility > - > > Key: HDFS-14070 > URL: https://issues.apache.org/jira/browse/HDFS-14070 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: CR Hota >Assignee: CR Hota >Priority: Major > Attachments: HDFS-14070.001.patch > > > Router extends NamenodeWebHdfsMethods, methods such as renewDelegationToken, > cancelDelegationToken and generateDelegationTokens should be extensible. > Router can then have its own implementation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures
[ https://issues.apache.org/jira/browse/HDFS-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684740#comment-16684740 ] Brahma Reddy Battula commented on HDFS-14065: - Linking the broken jira. Nice Catch [~ayushtkn].. > Failed Storage Locations shows nothing in the Datanode Volume Failures > -- > > Key: HDFS-14065 > URL: https://issues.apache.org/jira/browse/HDFS-14065 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1 > > Attachments: AfterChange.png, BeforeChange.png, HDFS-14065.patch > > > The failed storage locations in the *DataNode Volume Failure* UI shows > nothing. Despite having failed Storages. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13998) ECAdmin NPE with -setPolicy -replicate
[ https://issues.apache.org/jira/browse/HDFS-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683359#comment-16683359 ] Brahma Reddy Battula commented on HDFS-13998: - IMHO, the HDFS-13732 change might not be required. As the admin will be aware of the configured policy and these are admin commands, adding an RPC can mislead for concurrent calls and for any error while getting the policy after setting it, and it is extra overhead as [~ayushtkn] mentioned (an audit log for debugging, plus an RPC call). If you all agree, should we revert HDFS-13732 before we ship the 3.2 release and follow up in the next release? If it is really required, why can't we do it through getServerDefaults() (by adding an EC field there)? > ECAdmin NPE with -setPolicy -replicate > -- > > Key: HDFS-13998 > URL: https://issues.apache.org/jira/browse/HDFS-13998 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.2.0, 3.1.2 >Reporter: Xiao Chen >Assignee: Zsolt Venczel >Priority: Major > Attachments: HDFS-13998.01.patch, HDFS-13998.02.patch, > HDFS-13998.03.patch > > > HDFS-13732 tried to improve the output of the console tool. But we missed the > fact that for replication, {{getErasureCodingPolicy}} would return null. > This jira is to fix it in ECAdmin, and add a unit test. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13358) RBF: Support for Delegation Token (RPC)
[ https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16680148#comment-16680148 ] Brahma Reddy Battula commented on HDFS-13358: - [~crh] any update on this..? > RBF: Support for Delegation Token (RPC) > --- > > Key: HDFS-13358 > URL: https://issues.apache.org/jira/browse/HDFS-13358 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Sherwood Zheng >Assignee: CR Hota >Priority: Major > Attachments: RBF_ Delegation token design.pdf > > > HDFS Router should support issuing / managing HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678487#comment-16678487 ] Brahma Reddy Battula commented on HDFS-12284: - As per discussion in HDFS-13532,we are going to maintain single branch for RBF.I committed this to HDFS-13891 branch. > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Fix For: HDFS-13891, HDFS-13532 > > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, > HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, > HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, > HDFS-12284-HDFS-13532.013.patch, HDFS-12284-HDFS-13532.addendum.patch, > HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, > HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677551#comment-16677551 ] Brahma Reddy Battula commented on HDFS-13532: - Thanks all. I just committed the Kerberos patch (HDFS-12284) to the HDFS-13891 branch. > RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, RBF _ Security delegation > token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, RBF_ > Security delegation token thoughts_updated_3.pdf, Security_for_Router-based > Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13891) Über-jira: RBF stabilisation phase I
[ https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13891: Description: RBF shipped in 3.0+ and 2.9; now that it's out, various corner cases, scale and error handling issues are surfacing. And we are targeting the security feature (HDFS-13532) also. This umbrella is to fix all those issues and support missing protocols (HDFS-13655) before the next 3.3 release. was: RBF shipped in 3.0+ and 2.9.. now its out various corner cases, scale and error handling issues are surfacing. this umbrella to fix all those issues and support missing protocols(HDFS-13655) before next 3.3 release. > Über-jira: RBF stabilisation phase I > -- > > Key: HDFS-13891 > URL: https://issues.apache.org/jira/browse/HDFS-13891 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.2.0 >Reporter: Brahma Reddy Battula >Priority: Major > Labels: RBF > > RBF shipped in 3.0+ and 2.9; > now that it's out, various corner cases, scale and error handling issues are > surfacing. > And we are targeting the security feature (HDFS-13532) also. > This umbrella is to fix all those issues and support missing > protocols (HDFS-13655) before the next 3.3 release. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-12284: Fix Version/s: HDFS-13891 > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Fix For: HDFS-13891, HDFS-13532 > > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, > HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, > HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, > HDFS-12284-HDFS-13532.013.patch, HDFS-12284-HDFS-13532.addendum.patch, > HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, > HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677215#comment-16677215 ] Brahma Reddy Battula commented on HDFS-13532: - AFAIK, we are not targeting any new feature apart from security in RBF. HDFS-13891 is for stabilisation, where we can include this security feature also. And yes, core changes will be handled separately as they might not be from the RBF module. [~elgoiri] please correct me if I am wrong. > RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, RBF _ Security delegation > token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, RBF_ > Security delegation token thoughts_updated_3.pdf, Security_for_Router-based > Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677177#comment-16677177 ] Brahma Reddy Battula commented on HDFS-13532: - Hi all, it's better to have one branch for one feature. Unfortunately there are two branches (HDFS-13891 and HDFS-13532) in RBF now. Maintenance will be costly and voting also needs to be done twice, hence I am proposing to have one branch. As some commits went into HDFS-13891, I feel all security work (which targets HDFS-13532) can be committed to HDFS-13891. Any thoughts? > RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, RBF _ Security delegation > token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, RBF_ > Security delegation token thoughts_updated_3.pdf, Security_for_Router-based > Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-12284: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-13532 Status: Resolved (was: Patch Available) > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Fix For: HDFS-13532 > > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, > HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, > HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, > HDFS-12284-HDFS-13532.013.patch, HDFS-12284-HDFS-13532.addendum.patch, > HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, > HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677160#comment-16677160 ] Brahma Reddy Battula commented on HDFS-12284: - +1 on the addendum. Committed to the branch. [~elgoiri] thanks for your great contribution; once again, thanks to all who were involved. As discussed offline, do prepare the test report. > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, > HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, > HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, > HDFS-12284-HDFS-13532.013.patch, HDFS-12284-HDFS-13532.addendum.patch, > HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, > HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677141#comment-16677141 ] Brahma Reddy Battula commented on HDFS-12284: - Committed HDFS-12284-HDFS-13532.013.patch. Thanks for all the great discussions here. [~elgoiri] *HADOOP-15832* changed *org.bouncycastle* from *bcprov-jdk16* to *bcprov-jdk15on* (this appeared after rebasing the branch, hence Yetus also didn't catch it). Can you please upload one addendum patch for the same? > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, > HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, > HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, > HDFS-12284-HDFS-13532.013.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, > HDFS-12284.002.patch, HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675607#comment-16675607 ] Brahma Reddy Battula commented on HDFS-12284: - [~elgoiri], it looks like you marked this as blocked by HDFS-14051. This will not be blocked: if we configure "DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY", then it will not use the default (namenode) keytab. Please have a look at the following code snippet for the same. Maybe we can have a sanity check for this in the router HTTP server. IMO, we can go ahead committing this jira with the latest uploaded patch. {code:java} /** * Get SPNEGO keytab Key from configuration * * @param conf Configuration * @param defaultKey default key to be used for config lookup * @return DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY if the key is not empty * else return defaultKey */ public static String getSpnegoKeytabKey(Configuration conf, String defaultKey) { String value = conf.get(DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY); return (value == null || value.isEmpty()) ? defaultKey : DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY; }{code} > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, > HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, > HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, > HDFS-12284-HDFS-13532.013.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, > HDFS-12284.002.patch, HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
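The fallback logic in the snippet quoted above — return the SPNEGO-specific keytab key only when a value is actually configured for it, otherwise hand back the caller's default key — can be exercised in isolation with a plain Map standing in for Hadoop's Configuration:

```java
import java.util.Map;

public class SpnegoKeySketch {
  static final String SPNEGO_KEY = "dfs.web.authentication.kerberos.keytab";

  // Mirrors the getSpnegoKeytabKey logic quoted in the comment: return the
  // SPNEGO key name only when a value is configured for it, else the
  // caller's default key.
  public static String getSpnegoKeytabKey(Map<String, String> conf, String defaultKey) {
    String value = conf.get(SPNEGO_KEY);
    return (value == null || value.isEmpty()) ? defaultKey : SPNEGO_KEY;
  }
}
```

This is why the router is not blocked: an unset or empty SPNEGO key simply falls through to whatever default keytab key the caller passes in.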
[jira] [Commented] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService
[ https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672824#comment-16672824 ] Brahma Reddy Battula commented on HDFS-14024: - Rebased now. Please have a look. > RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService > - > > Key: HDFS-14024 > URL: https://issues.apache.org/jira/browse/HDFS-14024 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: CR Hota >Assignee: CR Hota >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14024-HDFS-13891.0.patch, HDFS-14024.0.patch > > > Routers may be proxying for a downstream name node that is NOT migrated to > understand "ProvidedCapacityTotal". The updateJMXParameters method in > NamenodeHeartbeatService should handle this without breaking. > > {code:java} > jsonObject.getLong("MissingBlocks"), > jsonObject.getLong("PendingReplicationBlocks"), > jsonObject.getLong("UnderReplicatedBlocks"), > jsonObject.getLong("PendingDeletionBlocks"), > jsonObject.getLong("ProvidedCapacityTotal")); > {code} > One way to do this is to create a JSON wrapper which gives back some default > if the JSON node is not found. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
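The wrapper suggested in the description above — give back a default instead of throwing when a field like "ProvidedCapacityTotal" is absent from an older NameNode's JMX output — might look like this, with a plain Map standing in for the parsed JSON object:

```java
import java.util.Map;

public class JmxDefaultSketch {
  // optLong-style accessor: a downstream NameNode that predates
  // "ProvidedCapacityTotal" yields the default instead of an exception,
  // which is the behaviour the description asks for.
  public static long getLong(Map<String, Object> jmx, String key, long dflt) {
    Object v = jmx.get(key);
    return (v instanceof Number) ? ((Number) v).longValue() : dflt;
  }
}
```

Each `jsonObject.getLong(...)` call in the snippet would then become `getLong(jmx, "ProvidedCapacityTotal", 0L)` and the heartbeat survives mixed-version clusters.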
[jira] [Updated] (HDFS-13845) RBF: The default MountTableResolver should fail resolving multi-destination paths
[ https://issues.apache.org/jira/browse/HDFS-13845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13845: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-13891 Status: Resolved (was: Patch Available) Committed to HDFS-13891 branch. [~hfyang20071] thanks for the contribution, and thanks to [~elgoiri] for the additional review. > RBF: The default MountTableResolver should fail resolving multi-destination > paths > - > > Key: HDFS-13845 > URL: https://issues.apache.org/jira/browse/HDFS-13845 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation, hdfs >Affects Versions: 3.0.0, 3.1.0, 2.9.1 >Reporter: yanghuafeng >Assignee: yanghuafeng >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-13845.001.patch, HDFS-13845.002.patch, > HDFS-13845.003.patch, HDFS-13845.004.patch, HDFS-13845.005.patch > > > When we use the default MountTableResolver to resolve the path, we cannot get > the destination paths for the default DestinationOrder.HASH. > {code:java} > // Some comments here > private static PathLocation buildLocation( > .. > List<RemoteLocation> locations = new LinkedList<>(); > for (RemoteLocation oneDst : entry.getDestinations()) { > String nsId = oneDst.getNameserviceId(); > String dest = oneDst.getDest(); > String newPath = dest; > if (!newPath.endsWith(Path.SEPARATOR) && !remainingPath.isEmpty()) { > newPath += Path.SEPARATOR; > } > newPath += remainingPath; > RemoteLocation remoteLocation = new RemoteLocation(nsId, newPath, path); > locations.add(remoteLocation); > } > DestinationOrder order = entry.getDestOrder(); > return new PathLocation(srcPath, locations, order); > } > {code} > The default order will be hash, but the HashFirstResolver will not be invoked > to order the location. > It is ambiguous for the MountTableResolver that we will see the HASH order in > the web ui for a multi-destination path but we cannot get the result. 
> In my opinion, the MountTableResolver will be a simple resolver to implement > 1 to 1 not including the 1 to n destinations. So we should check the > buildLocation. If the entry has multi destinations, we should reject it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
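The guard proposed at the end of the description above — have the default MountTableResolver fail fast on entries with more than one destination — reduces to a small check inside buildLocation. A sketch of the idea (the method name and exception choice here are illustrative, not the committed patch):

```java
import java.util.List;

public class MountEntrySketch {
  // The check the description proposes for buildLocation: the default
  // MountTableResolver only supports 1-to-1 mappings, so reject 1-to-n
  // entries instead of silently ignoring the HASH destination order.
  public static void checkSingleDestination(List<String> destinations) {
    if (destinations.size() != 1) {
      throw new IllegalArgumentException(
          "Default mount table resolver does not support "
          + destinations.size() + " destinations");
    }
  }
}
```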
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667768#comment-16667768 ] Brahma Reddy Battula commented on HDFS-12284: - [~elgoiri] thanks for updating the patch. The latest patch lgtm; pending Jenkins. [~daryn] do you have any comments on the latest patch? bq.I'm not sure if the hostname handling is necessary. You should be able to replace _HOST in the principal with the intended hostname instead of using another config key. As [~lukmajercak] pointed out, the new config is to support multi-homed environments where hosts are configured with multiple hostnames in DNS or hosts files (like the DN in HADOOP-12437). As [~crh] mentioned, UGI caching will be done as part of the connection pool optimisation. So, can we go ahead with the commit? > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, > HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, > HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, > HDFS-12284-HDFS-13532.013.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, > HDFS-12284.002.patch, HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
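The `_HOST` handling discussed above follows Hadoop's usual convention (performed by `SecurityUtil.getServerPrincipal`): the `_HOST` token in a configured principal is substituted with the node's hostname. The sketch below is a simplified stand-in for that substitution, not the actual Hadoop implementation; in a multi-homed environment no single canonical hostname exists, which is why a separate config key was proposed:

```java
public class PrincipalUtil {
    // Simplified sketch of the _HOST substitution convention: a configured
    // principal like "router/_HOST@EXAMPLE.COM" has _HOST replaced with the
    // (lowercased) hostname of the local machine.
    static String replaceHostPattern(String principal, String hostname) {
        return principal.replace("_HOST", hostname.toLowerCase());
    }

    public static void main(String[] args) {
        System.out.println(replaceHostPattern("router/_HOST@EXAMPLE.COM", "Host1.cluster"));
        // router/host1.cluster@EXAMPLE.COM
    }
}
```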
[jira] [Comment Edited] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667645#comment-16667645 ] Brahma Reddy Battula edited comment on HDFS-12284 at 10/29/18 8:00 PM: --- Thanks for working on this jira. IIUC, Daryn was referring to the following: for each operation, a UGI is getting created (UGI construction). {code:java} UserGroupInformation connUGI = ugi; if (UserGroupInformation.isSecurityEnabled()) { UserGroupInformation routerUser = UserGroupInformation.getLoginUser(); connUGI = UserGroupInformation.createProxyUser( ugi.getUserName(), routerUser); } connection = this.connectionManager.getConnection( connUGI, rpcAddress, proto); {code} {quote}I plan to enhance the connection pooling part by introducing synchronous connection creation using semaphore semantics instead of the current asynchronous connection creation. {quote} Mostly this can address it; we just need to avoid re-creating the proxy user when it is already constructed. {quote}The temporary solution for this JIRA is to add the definition of dfs.federation.router.kerberos.internal.spnego.principal to SecurityConfUtil#initSecurity(). Thoughts? {quote} Yes, we should add this config like all the other configs to start the router HTTP server. {quote}We can create another ticket for adding hdfs-rbf-default.xml in HdfsConfiguration, but wondering how it will work for NameNode? Because in a namenode scenario, hdfs-rbf-default.xml may not be in the classpath. {quote} AFAIK, just one more file (hdfs-rbf*) will be added to the classpath of the NameNode and DataNode. I don't think users will configure namenode/datanode configs in this file, so this will not impact those processes. I think the newly added test cases are not using the state store (as the ZK address is not used) and requests are not going through the router. We should commit this ASAP, as this blocks the delegation token implementation. [~crh] can you update the delegation token prototype based on this? 
was (Author: brahmareddy): Thanks for working on this jira. IIUC,Daryn was telling about following,for each operaion ugi is getting created(ugi construction). {code:java} 258 UserGroupInformation connUGI = ugi; 259 if (UserGroupInformation.isSecurityEnabled()) { 260 UserGroupInformation routerUser = UserGroupInformation.getLoginUser(); 261 connUGI = UserGroupInformation.createProxyUser( 262 ugi.getUserName(), routerUser); 263 } 264 connection = this.connectionManager.getConnection( 265 connUGI, rpcAddress, proto); {code} {quote}I plan to enhance the connection pooling part by introducing synchronous connection creation using semaphore semantics instead of the current asynchronous connection creation. {quote} Mostly this can address, just we need to aviod when proxy user is already constructed. {quote}The temporary solution for this JIRA is to add the definition of dfs.federation.router.kerberos.internal.spnego.principal to SecurityConfUtil#initSecurity(). Thoughts? {quote} Yes, we should this config like all other configs to start router http server. {quote}We can create another ticket for adding hdfs-rbf-default.xml in HdfsConfiguration, but wondering how it will work for NameNode? Because in a namenode scenario, hdfs-rbf-default.xml may not be in the classpath. {quote} AFAIK..Just one more file ( hdfs-rbf*) will be added to classpath of Namenode,DataNode..I dn't think,user will configure namenode/datanode configs in this file,so this will not impact these process. I think, Newly added testcases are not using the state store( as zk address is not used..) We should commit this ASAP, as this blocks delegation token impl,[~crh] can you update delegation toke proto type based on this..? 
> RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, > HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, > HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, > HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, > HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
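The "avoid re-constructing the proxy user" idea from the comment above can be sketched as a per-user cache. Note this is an illustrative sketch only: `Ugi` here is a hypothetical stand-in for Hadoop's `UserGroupInformation`, and the real connection-pool work referenced in the thread is more involved:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProxyUserCache {
    // Hypothetical stand-in for org.apache.hadoop.security.UserGroupInformation;
    // only the caching idea is illustrated, not the real security semantics.
    static final class Ugi {
        final String userName;
        Ugi(String userName) { this.userName = userName; }
    }

    private final Map<String, Ugi> cache = new ConcurrentHashMap<>();
    int constructions = 0; // counts how many times the expensive build ran

    Ugi getProxyUser(String userName) {
        // The expensive createProxyUser-style construction runs at most once
        // per user instead of once per RPC invocation.
        return cache.computeIfAbsent(userName, u -> {
            constructions++;
            return new Ugi(u);
        });
    }

    public static void main(String[] args) {
        ProxyUserCache c = new ProxyUserCache();
        Ugi a = c.getProxyUser("alice");
        Ugi b = c.getProxyUser("alice"); // cache hit: same instance, no rebuild
        System.out.println(a == b);           // true
        System.out.println(c.constructions);  // 1
    }
}
```

In the real router, any such cache would also have to handle token renewal and eviction, which is why the thread defers it to the connection-pool optimisation.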
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667645#comment-16667645 ] Brahma Reddy Battula commented on HDFS-12284: - Thanks for working on this jira. IIUC, Daryn was referring to the following: for each operation, a UGI is getting created (UGI construction). {code:java} UserGroupInformation connUGI = ugi; if (UserGroupInformation.isSecurityEnabled()) { UserGroupInformation routerUser = UserGroupInformation.getLoginUser(); connUGI = UserGroupInformation.createProxyUser( ugi.getUserName(), routerUser); } connection = this.connectionManager.getConnection( connUGI, rpcAddress, proto); {code} {quote}I plan to enhance the connection pooling part by introducing synchronous connection creation using semaphore semantics instead of the current asynchronous connection creation. {quote} Mostly this can address it; we just need to avoid re-creating the proxy user when it is already constructed. {quote}The temporary solution for this JIRA is to add the definition of dfs.federation.router.kerberos.internal.spnego.principal to SecurityConfUtil#initSecurity(). Thoughts? {quote} Yes, we should add this config like all the other configs to start the router HTTP server. {quote}We can create another ticket for adding hdfs-rbf-default.xml in HdfsConfiguration, but wondering how it will work for NameNode? Because in a namenode scenario, hdfs-rbf-default.xml may not be in the classpath. {quote} AFAIK, just one more file (hdfs-rbf*) will be added to the classpath of the NameNode and DataNode. I don't think users will configure namenode/datanode configs in this file, so this will not impact those processes. I think the newly added test cases are not using the state store (as the ZK address is not used). We should commit this ASAP, as this blocks the delegation token implementation. [~crh] can you update the delegation token prototype based on this? 
> RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284-HDFS-13532.008.patch, > HDFS-12284-HDFS-13532.009.patch, HDFS-12284-HDFS-13532.010.patch, > HDFS-12284-HDFS-13532.011.patch, HDFS-12284-HDFS-13532.012.patch, > HDFS-12284.000.patch, HDFS-12284.001.patch, HDFS-12284.002.patch, > HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13891) Über-jira: RBF stabilisation phase I
[ https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647863#comment-16647863 ] Brahma Reddy Battula commented on HDFS-13891: - Rebased the HDFS-13891 branch after the HDFS-13906 commit. > Über-jira: RBF stabilisation phase I > -- > > Key: HDFS-13891 > URL: https://issues.apache.org/jira/browse/HDFS-13891 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.2.0 >Reporter: Brahma Reddy Battula >Priority: Major > Labels: RBF > > RBF shipped in 3.0+ and 2.9. > Now that it's out, various corner cases, scale, and error-handling issues are > surfacing. > This umbrella is to fix all those issues and support the missing > protocols (HDFS-13655) before the next 3.3 release. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService
[ https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16638027#comment-16638027 ] Brahma Reddy Battula commented on HDFS-13955: - As I commented in HDFS-12284, we need to handle HTTPS also. Uploaded a draft for the same. > RBF: Support secure Namenode in NamenodeHeartbeatService > > > Key: HDFS-13955 > URL: https://issues.apache.org/jira/browse/HDFS-13955 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-13955-HDFS-13532.000.patch, > HDFS-13955-HDFS-13532.001.patch > > > Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the > Namenodes. We should support HTTPS. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService
[ https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13955: Attachment: HDFS-13955-HDFS-13532.001.patch > RBF: Support secure Namenode in NamenodeHeartbeatService > > > Key: HDFS-13955 > URL: https://issues.apache.org/jira/browse/HDFS-13955 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-13955-HDFS-13532.000.patch, > HDFS-13955-HDFS-13532.001.patch > > > Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the > Namenodes. We should support HTTPs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637276#comment-16637276 ] Brahma Reddy Battula commented on HDFS-12284: - Thanks for working on this. At first glance, the NamenodeHeartbeatService changes (JMX) can be done in a separate JIRA and need to include "https" also. Refer to org.apache.hadoop.hdfs.DFSUtil#getInfoServer() for the same. Will look into the details in a couple of days. bq.My main concern is [~zhengxg3] saw some issues without doAs for jmx. Would like to understand the issue and figure out why I am not seeing the same. Maybe some config like *"hadoop.security.service.user.name.key"* is missed? [~zhengxg3] can you post the errors and configs used? > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-12284-HDFS-13532.004.patch, > HDFS-12284-HDFS-13532.005.patch, HDFS-12284-HDFS-13532.006.patch, > HDFS-12284-HDFS-13532.007.patch, HDFS-12284.000.patch, HDFS-12284.001.patch, > HDFS-12284.002.patch, HDFS-12284.003.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13953) Failure of last datanode in the pipeline results in block recovery failure and subsequent NPE during fsck
[ https://issues.apache.org/jira/browse/HDFS-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637221#comment-16637221 ] Brahma Reddy Battula commented on HDFS-13953: - Is it related to HDFS-10714? The NPE is because of faulty storages, and it's addressed in HDFS-12299? > Failure of last datanode in the pipeline results in block recovery failure > and subsequent NPE during fsck > - > > Key: HDFS-13953 > URL: https://issues.apache.org/jira/browse/HDFS-13953 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.0 >Reporter: Hrishikesh Gadre >Priority: Major > > A user reported the following scenario: > * HBase region server created a WAL and attempted to write > * As part of the pipeline write, the following events happened: > ** The last data node in the pipeline failed. > ** The region server could not identify this last data node as the root > cause of the write failure and instead reported to the NN the first data node in the > pipeline as the cause of failure. > ** NN created a new write pipeline by replacing the good data node and > retaining the faulty data node. > ** This process continued for three iterations until NN encountered an NPE. 
> * Now the fsck on the /hbase directory is also failing due to an NPE in the NN > The following stack traces were found in the region server logs > {noformat} > WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception > org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeStaleReplicas(BlockManager.java:3238) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.updateLastBlock(BlockManager.java:3633) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:7374) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:7339) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:777) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.updatePipeline(AuthorizationProviderProxyClientProtocol.java:654){noformat} > > AND > > {noformat} > WARN org.apache.hadoop.hbase.util.FSHDFSUtils: attempt=0 on > file=hdfs://nameservice1/hbase/genie/WALs/ABC,60020,1525325654855-splitting/abc%2C60020%2C1525325654855.null0.1536002440010 > after 6ms > org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction$ReplicaUnderConstruction.isAlive(BlockInfoUnderConstruction.java:121) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction.initializeBlockRecovery(BlockInfoUnderConstruction.java:288) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:4846) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3252) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLease(FSNamesystem.java:3196) > at > 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.recoverLease(NameNodeRpcServer.java:630) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.recoverLease(AuthorizationProviderProxyClientProtocol.java:372) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.recoverLease(ClientNamenodeProtocolServerSideTranslatorPB.java:681) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073){noformat} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13840) RBW Blocks which are having less GS should be added to Corrupt
[ https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13840: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.2 3.0.4 3.2.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-3.1 and branch-3.0. [~surendrasingh] thanks for the review. > RBW Blocks which are having less GS should be added to Corrupt > -- > > Key: HDFS-13840 > URL: https://issues.apache.org/jira/browse/HDFS-13840 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Minor > Fix For: 3.2.0, 3.0.4, 3.1.2 > > Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, > HDFS-13840-004.patch, HDFS-13840-005.patch, HDFS-13840.patch > > > # Start two DNs (DN1, DN2). > # Write fileA with rep=2 (don't close). > # Stop DN1. > # Write some data to fileA. > # Restart DN1. > # Get the block locations of fileA. > Here the RWR-state block will be reported on DN restart and added to the locations. > IMO, RWR blocks which have a smaller GS shouldn't be added, as they give a false > positive (anyway the read can fail, as its genstamp is smaller). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
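The staleness rule argued in the issue above boils down to a generation-stamp comparison. The sketch below is a hypothetical simplification (names invented for illustration), not the actual BlockManager logic:

```java
public class ReplicaCheck {
    // A reported RWR replica whose generation stamp is behind the block's
    // recorded generation stamp missed writes while its DN was down, so
    // counting it as a valid location gives a false positive.
    static boolean isStaleReplica(long reportedGenStamp, long expectedGenStamp) {
        return reportedGenStamp < expectedGenStamp;
    }

    public static void main(String[] args) {
        System.out.println(isStaleReplica(1001L, 1002L)); // true: DN restarted with old data
        System.out.println(isStaleReplica(1002L, 1002L)); // false: replica is up to date
    }
}
```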
[jira] [Updated] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module
[ https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13790: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.2 2.9.2 2.10.0 Status: Resolved (was: Patch Available) Pushed to branch-3.1, branch-2 and branch-2.9. [~csun] thanks for the contribution. > RBF: Move ClientProtocol APIs to its own module > --- > > Key: HDFS-13790 > URL: https://issues.apache.org/jira/browse/HDFS-13790 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Chao Sun >Priority: Major > Fix For: 2.10.0, 3.2.0, 2.9.2, 3.1.2 > > Attachments: HDFS-13790-branch-2.000.patch, > HDFS-13790-branch-2.9.000.patch, HDFS-13790-branch-2.9.001.patch, > HDFS-13790-branch-3.1.000.patch, HDFS-13790-branch-3.1.001.patch, > HDFS-13790-branch-3.1.002.patch, HDFS-13790.000.patch, HDFS-13790.001.patch > > > {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} > isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} > should have its own {{RouterClientProtocol}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module
[ https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629167#comment-16629167 ] Brahma Reddy Battula commented on HDFS-13790: - +1 on HDFS-13790-branch-3.1.002.patch; the Jenkins report is also clean. HDFS-13790-branch-2.000.patch and HDFS-13790-branch-2.9.000.patch compiled locally. Going to commit shortly. > RBF: Move ClientProtocol APIs to its own module > --- > > Key: HDFS-13790 > URL: https://issues.apache.org/jira/browse/HDFS-13790 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Chao Sun >Priority: Major > Fix For: 3.2.0 > > Attachments: HDFS-13790-branch-2.000.patch, > HDFS-13790-branch-2.9.000.patch, HDFS-13790-branch-2.9.001.patch, > HDFS-13790-branch-3.1.000.patch, HDFS-13790-branch-3.1.001.patch, > HDFS-13790-branch-3.1.002.patch, HDFS-13790.000.patch, HDFS-13790.001.patch > > > {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} > isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} > should have its own {{RouterClientProtocol}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12319) DirectoryScanner will throw IllegalStateException when Multiple BP's are present
[ https://issues.apache.org/jira/browse/HDFS-12319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627026#comment-16627026 ] Brahma Reddy Battula commented on HDFS-12319: - Yes, we should backport this to branch-2.7. > DirectoryScanner will throw IllegalStateException when Multiple BP's are > present > > > Key: HDFS-12319 > URL: https://issues.apache.org/jira/browse/HDFS-12319 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Blocker > Fix For: 2.9.0, 3.0.0-beta1, 2.8.2 > > Attachments: HDFS-12319-001.patch, HDFS-12319-002.patch, > TestCase_to_Reproduce.patch > > > *Scenario:* > Configure "*dfs.datanode.directoryscan.interval*" as *60* and start a federated > cluster with at least two nameservices. > {noformat} > 2017-08-18 19:06:37,150 > [java.util.concurrent.ThreadPoolExecutor$Worker@37d68b4e[State = -1, empty > queue]] ERROR datanode.DirectoryScanner > (DirectoryScanner.java:getDiskReport(551)) - Error compiling report for the > volume, StorageId: DS-258b5e16-caa3-48c8-a0c8-b16934eb8a0c > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > StopWatch is already running > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:542) > at > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:392) > at > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:373) > at > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:318) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.IllegalStateException: StopWatch is already running > at org.apache.hadoop.util.StopWatch.start(StopWatch.java:49) > at > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:612) > at > org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:579) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ... 3 more > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
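The failure mode in the stack trace above can be reproduced with a minimal stand-in for Hadoop's `StopWatch` (the real class is `org.apache.hadoop.util.StopWatch`; this sketch only mimics its "already running" check). With multiple block pools, report compilers sharing one watch hit exactly this exception; a fresh watch per compilation avoids it:

```java
public class SharedStopWatchDemo {
    // Minimal stand-in: like Hadoop's StopWatch, start() on an
    // already-running watch throws IllegalStateException.
    static final class StopWatch {
        private boolean running;
        StopWatch start() {
            if (running) throw new IllegalStateException("StopWatch is already running");
            running = true;
            return this;
        }
        void stop() { running = false; }
    }

    public static void main(String[] args) {
        // Bug: one watch shared across block pools' ReportCompiler calls.
        StopWatch shared = new StopWatch().start();
        boolean failed = false;
        try {
            shared.start();              // second block pool reuses the running watch
        } catch (IllegalStateException e) {
            failed = true;
        }
        System.out.println(failed);      // true: the reported exception

        // Fix: a fresh StopWatch per report compilation never conflicts.
        new StopWatch().start().stop();
        new StopWatch().start().stop();
    }
}
```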
[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module
[ https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622343#comment-16622343 ] Brahma Reddy Battula commented on HDFS-13790: - [~csun] thanks for uploading the patch. Can you please handle the checkstyle issues? > RBF: Move ClientProtocol APIs to its own module > --- > > Key: HDFS-13790 > URL: https://issues.apache.org/jira/browse/HDFS-13790 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Chao Sun >Priority: Major > Fix For: 3.2.0 > > Attachments: HDFS-13790-branch-2.000.patch, > HDFS-13790-branch-2.9.000.patch, HDFS-13790-branch-2.9.001.patch, > HDFS-13790-branch-3.1.000.patch, HDFS-13790-branch-3.1.001.patch, > HDFS-13790.000.patch, HDFS-13790.001.patch > > > {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} > isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} > should have its own {{RouterClientProtocol}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module
[ https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619561#comment-16619561 ] Brahma Reddy Battula commented on HDFS-13790: - Yes, there is a JIRA, HADOOP-13951; I commented the same there. [~csun], can you please update the branch-2 and branch-3.1 patches correctly? branch-2.9 applies cleanly. > RBF: Move ClientProtocol APIs to its own module > --- > > Key: HDFS-13790 > URL: https://issues.apache.org/jira/browse/HDFS-13790 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Chao Sun >Priority: Major > Fix For: 3.2.0 > > Attachments: HDFS-13790-branch-2.9.000.patch, > HDFS-13790-branch-2.9.001.patch, HDFS-13790-branch-3.1.000.patch, > HDFS-13790.000.patch, HDFS-13790.001.patch > > > {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} > isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} > should have its own {{RouterClientProtocol}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module
[ https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13790: Parent Issue: HDFS-12615 (was: HDFS-13655) > RBF: Move ClientProtocol APIs to its own module > --- > > Key: HDFS-13790 > URL: https://issues.apache.org/jira/browse/HDFS-13790 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Chao Sun >Priority: Major > Fix For: 3.2.0 > > Attachments: HDFS-13790-branch-2.9.000.patch, > HDFS-13790-branch-2.9.001.patch, HDFS-13790-branch-3.1.000.patch, > HDFS-13790.000.patch, HDFS-13790.001.patch > > > {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} > isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} > should have its own {{RouterClientProtocol}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13891) Über-jira: RBF stabilisation phase I
[ https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619394#comment-16619394 ] Brahma Reddy Battula commented on HDFS-13891: - Created the branch, so we can start committing to HDFS-13891. > Über-jira: RBF stabilisation phase I > -- > > Key: HDFS-13891 > URL: https://issues.apache.org/jira/browse/HDFS-13891 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.2.0 >Reporter: Brahma Reddy Battula >Priority: Major > Labels: RBF > > RBF shipped in 3.0+ and 2.9. > Now that it's out, various corner cases, scale, and error-handling issues are > surfacing. > This umbrella is to fix all those issues and support the missing > protocols (HDFS-13655) before the next 3.3 release. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13891) Über-jira: RBF stabilisation phase I
[ https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13891: Description: RBF shipped in 3.0+ and 2.9. Now that it's out, various corner cases, scale, and error-handling issues are surfacing. This umbrella is to fix all those issues and support the missing protocols (HDFS-13655) before the next 3.3 release. was:RBF shipped in 3.0+ and 2.9..now its out various corner cases, scale and error handling issues are surfacing. this umbrella to fix all those issues before next 3.3 release. > Über-jira: RBF stabilisation phase I > -- > > Key: HDFS-13891 > URL: https://issues.apache.org/jira/browse/HDFS-13891 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.2.0 >Reporter: Brahma Reddy Battula >Priority: Major > Labels: RBF > > RBF shipped in 3.0+ and 2.9. > Now that it's out, various corner cases, scale, and error-handling issues are > surfacing. > This umbrella is to fix all those issues and support the missing > protocols (HDFS-13655) before the next 3.3 release. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13776) RBF: Add Storage policies related ClientProtocol APIs
[ https://issues.apache.org/jira/browse/HDFS-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619362#comment-16619362 ] Brahma Reddy Battula commented on HDFS-13776: - IMO, we can add the client protocol support in HDFS-13891 so that all the client-protocol APIs can be addressed (and there is no need to have a separate branch for this, as we already have two branches for RBF). If you agree, I will add it to that umbrella. And I didn't get the extra advantage of having a separate class (RouterStoragePolicy); this will create one extra object which is almost similar to RouterClientProtocol. Anyway, the router ClientProtocol isn't heavy now. > RBF: Add Storage policies related ClientProtocol APIs > - > > Key: HDFS-13776 > URL: https://issues.apache.org/jira/browse/HDFS-13776 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Major > Attachments: HDFS-13776-000.patch, HDFS-13776-001.patch, > HDFS-13776-002.patch, HDFS-13776-003.patch, HDFS-13776-004.patch, > HDFS-13776-005.patch, HDFS-13776-006.patch > > > Currently unsetStoragePolicy and getStoragePolicy are not implemented in > RouterRpcServer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module
[ https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619084#comment-16619084 ] Brahma Reddy Battula commented on HDFS-13790: - [~csun] thanks for updating the patches. Uploaded the branch-2.9 patch again to trigger Jenkins. Can you update the branch-3.1 patch (remove the SPS-related changes, which are not merged to branch-3.1)? > RBF: Move ClientProtocol APIs to its own module > --- > > Key: HDFS-13790 > URL: https://issues.apache.org/jira/browse/HDFS-13790 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Chao Sun >Priority: Major > Fix For: 3.2.0 > > Attachments: HDFS-13790-branch-2.9.000.patch, > HDFS-13790-branch-2.9.001.patch, HDFS-13790-branch-3.1.000.patch, > HDFS-13790.000.patch, HDFS-13790.001.patch > > > {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} > isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} > should have its own {{RouterClientProtocol}}.
[jira] [Updated] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module
[ https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13790: Attachment: HDFS-13790-branch-2.9.001.patch > RBF: Move ClientProtocol APIs to its own module > --- > > Key: HDFS-13790 > URL: https://issues.apache.org/jira/browse/HDFS-13790 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Chao Sun >Priority: Major > Fix For: 3.2.0 > > Attachments: HDFS-13790-branch-2.9.000.patch, > HDFS-13790-branch-2.9.001.patch, HDFS-13790-branch-3.1.000.patch, > HDFS-13790.000.patch, HDFS-13790.001.patch > > > {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} > isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} > should have its own {{RouterClientProtocol}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617749#comment-16617749 ] Brahma Reddy Battula edited comment on HDFS-13532 at 9/17/18 4:14 PM: -- [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid..?As Router also have token(act as proxy user) so auth can be done through token. {quote}Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. {quote} {quote}bq. Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. {quote} and once after updating in statestore then we can return ack to the client. was (Author: brahmareddy): [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid..?As Router also have token(act as proxy user) so auth can be done through token. {quote}Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. {quote} bq.Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. and once after updating in statestore then we can return ack to the client. 
> RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, RBF _ Security delegation > token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, > Security_for_Router-based Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617749#comment-16617749 ] Brahma Reddy Battula edited comment on HDFS-13532 at 9/17/18 4:13 PM: -- [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid..?As Router also have token(act as proxy user) so auth can be done through token. {quote}Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. {quote} bq.Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. and once after updating in statestore then we can return ack to the client. was (Author: brahmareddy): [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid..?As Router also have token(act as proxy user) so auth can be done through token. {quote}Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. {quote} bq. Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. and once after updating in statestore then we can return ack to the client. 
> RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, RBF _ Security delegation > token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, > Security_for_Router-based Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617749#comment-16617749 ] Brahma Reddy Battula edited comment on HDFS-13532 at 9/17/18 4:13 PM: -- [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid..?As Router also have token(act as proxy user) so auth can be done through token. bq. Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. bq. Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. and once after updating in statestore then we can return ack to the client. was (Author: brahmareddy): [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid..?As Router also have token(act as proxy user) so auth can be done through token. {quote}{quote} Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. {quote} {quote} Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. {quote}{quote} and once after updating in statestore then we can return ack to the client. 
> RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, RBF _ Security delegation > token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, > Security_for_Router-based Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617749#comment-16617749 ] Brahma Reddy Battula edited comment on HDFS-13532 at 9/17/18 4:13 PM: -- [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid..?As Router also have token(act as proxy user) so auth can be done through token. {quote}Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. {quote} bq. Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. and once after updating in statestore then we can return ack to the client. was (Author: brahmareddy): [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid..?As Router also have token(act as proxy user) so auth can be done through token. bq. Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. bq. Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. and once after updating in statestore then we can return ack to the client. 
> RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, RBF _ Security delegation > token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, > Security_for_Router-based Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617749#comment-16617749 ] Brahma Reddy Battula edited comment on HDFS-13532 at 9/17/18 4:12 PM: -- [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid..?As Router also have token(act as proxy user) so auth can be done through token. {quote}{quote} Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. {quote} {quote} Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. {quote}{quote} and once after updating in statestore then we can return ack to the client. was (Author: brahmareddy): [~crh] thanks for updating. As discussed in call, Following Cons for approach 1 are still valid, as Router also token(act as proxy user) so auth can be done through token. {quote}bq. Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. bq. Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. {quote} and once after updating in statestore then we can return ack to the client. 
> RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, RBF _ Security delegation > token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, > Security_for_Router-based Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617749#comment-16617749 ] Brahma Reddy Battula commented on HDFS-13532: - [~crh] thanks for updating. As discussed in the call, are the following cons for approach 1 still valid? As the Router also holds a token (acting as a proxy user), auth can be done through the token. {quote}bq. Without delegation token use namenodes will end up putting all the load on KDC for kerberos ticket verification. This will defeat one of the main rationales behind why delegation tokens were introduced in namenode. bq. Performance of namenodes will deteriorate further as network calls need to be made to kdc for ticket verification instead of in memory cache of delegation tokens that is maintained currently. {quote} And only after updating the state store should we return an ack to the client. > RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, RBF _ Security delegation > token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, > Security_for_Router-based Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens.
[jira] [Commented] (HDFS-13914) Fix DN UI logs link broken when https is enabled after HDFS-13902
[ https://issues.apache.org/jira/browse/HDFS-13914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613716#comment-16613716 ] Brahma Reddy Battula commented on HDFS-13914: - +1. The same should be applicable to the namenode logs link. [~jiangjianfei] thanks for reporting. Yes, the loglevel link can also be useful; as of now it's effectively hidden. > Fix DN UI logs link broken when https is enabled after HDFS-13902 > - > > Key: HDFS-13914 > URL: https://issues.apache.org/jira/browse/HDFS-13914 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.2.0 >Reporter: Jianfei Jiang >Assignee: Jianfei Jiang >Priority: Minor > Labels: patch > Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2 > > Attachments: HDFS-13914_001.patch > > > The bug that the DN UI logs link is broken when https is enabled was fixed by > HDFS-13581; however, after HDFS-13902, this bug appears again.
[jira] [Updated] (HDFS-13902) Add JMX, conf and stacks menus to the datanode page
[ https://issues.apache.org/jira/browse/HDFS-13902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13902: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.9.2 Status: Resolved (was: Patch Available) Committed to trunk through branch-2.9. [~fengchuang] thanks for the contribution, and thanks [~elgoiri] for the additional review. > Add JMX, conf and stacks menus to the datanode page > > > Key: HDFS-13902 > URL: https://issues.apache.org/jira/browse/HDFS-13902 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.0.3 >Reporter: fengchuang >Assignee: fengchuang >Priority: Minor > Fix For: 2.9.2 > > Attachments: HDFS-13902.001.patch > > > Add JMX, conf and stacks menus to the datanode page.
[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters
[ https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16610099#comment-16610099 ] Brahma Reddy Battula commented on HDFS-13237: - Committed to trunk and branch-3.1. [~elgoiri] thanks for the contribution, and thanks to the others for the additional review. > [Documentation] RBF: Mount points across multiple subclusters > - > > Key: HDFS-13237 > URL: https://issues.apache.org/jira/browse/HDFS-13237 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 3.2.0, 3.1.2 > > Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, > HDFS-13237.002.patch, HDFS-13237.003.patch, HDFS-13237.004.patch, > HDFS-13237.005.patch > > > Document the feature to spread mount points across multiple subclusters.
[jira] [Updated] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters
[ https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13237: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.2 3.2.0 Status: Resolved (was: Patch Available) > [Documentation] RBF: Mount points across multiple subclusters > - > > Key: HDFS-13237 > URL: https://issues.apache.org/jira/browse/HDFS-13237 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 3.2.0, 3.1.2 > > Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, > HDFS-13237.002.patch, HDFS-13237.003.patch, HDFS-13237.004.patch, > HDFS-13237.005.patch > > > Document the feature to spread mount points across multiple subclusters. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters
[ https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16610097#comment-16610097 ] Brahma Reddy Battula commented on HDFS-13237: - +1, Committing shortly. > [Documentation] RBF: Mount points across multiple subclusters > - > > Key: HDFS-13237 > URL: https://issues.apache.org/jira/browse/HDFS-13237 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, > HDFS-13237.002.patch, HDFS-13237.003.patch, HDFS-13237.004.patch, > HDFS-13237.005.patch > > > Document the feature to spread mount points across multiple subclusters. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters
[ https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13237: Parent Issue: HDFS-12615 (was: HDFS-13891) > [Documentation] RBF: Mount points across multiple subclusters > - > > Key: HDFS-13237 > URL: https://issues.apache.org/jira/browse/HDFS-13237 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, > HDFS-13237.002.patch, HDFS-13237.003.patch, HDFS-13237.004.patch, > HDFS-13237.005.patch > > > Document the feature to spread mount points across multiple subclusters. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters
[ https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609635#comment-16609635 ] Brahma Reddy Battula commented on HDFS-13237: - The latest patch lgtm, but it looks like one of Wei Yan's comments is not addressed ("RANDOM can balance both READ and WRITE workload across subcusters, not just reading workload, right?") > [Documentation] RBF: Mount points across multiple subclusters > - > > Key: HDFS-13237 > URL: https://issues.apache.org/jira/browse/HDFS-13237 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, > HDFS-13237.002.patch, HDFS-13237.003.patch, HDFS-13237.004.patch > > > Document the feature to spread mount points across multiple subclusters.
[jira] [Updated] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters
[ https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13237: Parent Issue: HDFS-13891 (was: HDFS-12615) > [Documentation] RBF: Mount points across multiple subclusters > - > > Key: HDFS-13237 > URL: https://issues.apache.org/jira/browse/HDFS-13237 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13237.000.patch, HDFS-13237.001.patch, > HDFS-13237.002.patch, HDFS-13237.003.patch, HDFS-13237.004.patch > > > Document the feature to spread mount points across multiple subclusters. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13577) RBF: Failed mount point operations, returns wrong exit code.
[ https://issues.apache.org/jira/browse/HDFS-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13577: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Failed mount point operations, returns wrong exit code. > > > Key: HDFS-13577 > URL: https://issues.apache.org/jira/browse/HDFS-13577 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Y. SREENIVASULU REDDY >Assignee: Dibyendu Karmakar >Priority: Major > Labels: RBF > > If the client performs an add-mount-point operation with some special character, the mount > point add fails. > And it prints a message like > {noformat} > 18/05/17 09:58:34 DEBUG ipc.ProtobufRpcEngine: Call: addMountTableEntry took > 19ms Cannot add mount point /testSpecialCharMountPointCreation/test/ > {noformat} > In the above case it should return a non-zero exit code. > {code:java|title=RouterAdmin.java|borderStyle=solid} > Exception debugException = null; > exitCode = 0; > try { > if ("-add".equals(cmd)) { > if (addMount(argv, i)) { > System.out.println("Successfully added mount point " + argv[i]); > } > {code} > We should handle these kinds of cases as well.
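The issue description above argues that a failed mount operation should surface a non-zero exit code rather than leaving `exitCode` at its initial value of 0. A minimal, self-contained sketch of that pattern (class and method names here are illustrative, not the actual RouterAdmin code):

```java
// Hedged sketch: return a non-zero exit code when the operation fails,
// instead of always reporting success. ExitCodeSketch and runAddMount
// are hypothetical names used only for illustration.
public class ExitCodeSketch {
    static int runAddMount(boolean succeeded, String mountPoint) {
        if (succeeded) {
            System.out.println("Successfully added mount point " + mountPoint);
            return 0;   // success: callers and scripts rely on a zero exit code
        }
        System.err.println("Cannot add mount point " + mountPoint);
        return -1;      // failure: non-zero so shell scripts can detect it
    }
}
```

A CLI would then pass this value to `System.exit(...)` so the shell sees the failure.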
[jira] [Updated] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available
[ https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-12716: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.10.0 Status: Resolved (was: Patch Available) Committed to branch-2. The Jenkins precommit is having an issue with branch-2; see HADOOP-13951 for details. I verified the fix locally; there was one whitespace issue, fixed while committing. [~RANith] thanks for the contribution. > 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes > to be available > - > > Key: HDFS-12716 > URL: https://issues.apache.org/jira/browse/HDFS-12716 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: usharani >Assignee: Ranith Sardar >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.0.4, 3.1.2 > > Attachments: HDFS-12716-branch-2.patch, HDFS-12716.002.patch, > HDFS-12716.003.patch, HDFS-12716.004.patch, HDFS-12716.005.patch, > HDFS-12716.006.patch, HDFS-12716.patch, HDFS-12716_branch-2.patch > > > Currently 'dfs.datanode.failed.volumes.tolerated' takes the number of > tolerated failed volumes. This configuration change requires a > restart of the datanode. Since datanode volumes can be changed dynamically, > keeping this configuration the same for all may not be a good idea. > Support 'dfs.datanode.failed.volumes.tolerated' to accept a special > negative value 'x' to tolerate failures of up to "n-x".
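The negative-value semantics described in the issue (a configured value of -x means "keep at least x volumes available", so with n volumes up to n-x failures are tolerated) can be sketched as below. This is an assumption-laden illustration of the described behavior, not the actual DataNode implementation; `VolumePolicy` and `toleratedFailures` are hypothetical names.

```java
// Hedged sketch of the semantics in the description: a non-negative
// configured value is the plain tolerated-failure count; a negative
// value -x means "at least x volumes must remain available", so with
// totalVolumes volumes up to totalVolumes - x failures are tolerated.
public class VolumePolicy {
    static int toleratedFailures(int configured, int totalVolumes) {
        if (configured >= 0) {
            return configured;            // plain count of tolerated failures
        }
        int minAvailable = -configured;   // -x => keep at least x volumes usable
        return Math.max(0, totalVolumes - minAvailable);
    }
}
```

With 4 volumes, a configured value of -1 would tolerate 3 failures, while a plain 2 would tolerate exactly 2, regardless of how many volumes the node has.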
[jira] [Commented] (HDFS-13902) Add JMX, conf and stacks menus to the datanode page
[ https://issues.apache.org/jira/browse/HDFS-13902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608526#comment-16608526 ] Brahma Reddy Battula commented on HDFS-13902: - The JournalNode improvement can be handled in a separate jira. [~elgoiri], if you agree, I can go ahead with the commit. > Add JMX, conf and stacks menus to the datanode page > > > Key: HDFS-13902 > URL: https://issues.apache.org/jira/browse/HDFS-13902 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.0.3 >Reporter: fengchuang >Assignee: fengchuang >Priority: Minor > Attachments: HDFS-13902.001.patch > > > Add JMX, conf and stacks menus to the datanode page.
[jira] [Commented] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available
[ https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608525#comment-16608525 ] Brahma Reddy Battula commented on HDFS-12716: - Looks like there is some problem with the branch-2 Jenkins. I triggered Jenkins again. > 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes > to be available > - > > Key: HDFS-12716 > URL: https://issues.apache.org/jira/browse/HDFS-12716 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: usharani >Assignee: Ranith Sardar >Priority: Major > Fix For: 3.2.0, 3.0.4, 3.1.2 > > Attachments: HDFS-12716-branch-2.patch, HDFS-12716.002.patch, > HDFS-12716.003.patch, HDFS-12716.004.patch, HDFS-12716.005.patch, > HDFS-12716.006.patch, HDFS-12716.patch, HDFS-12716_branch-2.patch > > > Currently 'dfs.datanode.failed.volumes.tolerated' takes the number of > tolerated failed volumes. This configuration change requires a > restart of the datanode. Since datanode volumes can be changed dynamically, > keeping this configuration the same for all may not be a good idea. > Support 'dfs.datanode.failed.volumes.tolerated' to accept a special > negative value 'x' to tolerate failures of up to "n-x".
[jira] [Updated] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands
[ https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13862: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.2 3.2.0 Status: Resolved (was: Patch Available) Committed to trunk and branch-3.1. [~ayushtkn] thanks for the contribution, [~SoumyaPN] thanks for reporting, and thanks to [~elgoiri] for the additional review. > RBF: Router logs are not capturing few of the dfsrouteradmin commands > - > > Key: HDFS-13862 > URL: https://issues.apache.org/jira/browse/HDFS-13862 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Soumyapn >Assignee: Ayush Saxena >Priority: Major > Labels: RBF > Fix For: 3.2.0, 3.1.2 > > Attachments: HDFS-13862-01.patch, HDFS-13862-02.patch, > HDFS-13862-03.patch, HDFS-13862-04.patch, HDFS-13862-05.patch > > > Test Steps: > The below commands are not getting captured in the Router logs. > # The destination entry name in the add command. The log only says "Added new mount point > /apps9 to resolver". > # Safemode enter|leave|get commands > # nameservice enable
[jira] [Commented] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608515#comment-16608515 ] Brahma Reddy Battula commented on HDFS-13532: - [~crh] thanks for organising the meeting and for the detailed design doc. Hope you can update the MOM. I am favourable to approach 1 (which completely moves the token life cycle to the Router), as we need to consider the additional cost as well. 1) Please update the cons for approach 1. 2) For syncing tokens across the routers, maybe we can use a refresh/sync thread like HDFS-13443. 3) Do we also need to handle the KMS token? [~daryn]/[~lmccay]/[~vinayrpet] if you get a chance, kindly review the design. > RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: CR Hota >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ > Security delegation token thoughts_updated.pdf, > RBF-DelegationToken-Approach1b.pdf, Security_for_Router-based > Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens.
[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands
[ https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608502#comment-16608502 ] Brahma Reddy Battula commented on HDFS-13862: - +1 on the latest patch, pending Jenkins. > RBF: Router logs are not capturing few of the dfsrouteradmin commands > - > > Key: HDFS-13862 > URL: https://issues.apache.org/jira/browse/HDFS-13862 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Soumyapn >Assignee: Ayush Saxena >Priority: Major > Labels: RBF > Attachments: HDFS-13862-01.patch, HDFS-13862-02.patch, > HDFS-13862-03.patch, HDFS-13862-04.patch, HDFS-13862-05.patch > > > Test Steps : > The below commands are not getting captured in the Router logs: > # Destination entry name in the add command. The log says "Added new mount point > /apps9 to resolver". > # Safemode enter|leave|get commands > # nameservice enable -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13834) RBF: Connection creator thread should catch Throwable
[ https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13834: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Connection creator thread should catch Throwable > - > > Key: HDFS-13834 > URL: https://issues.apache.org/jira/browse/HDFS-13834 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: CR Hota >Assignee: CR Hota >Priority: Critical > Attachments: HDFS-13834.0.patch, HDFS-13834.1.patch > > > The connection creator thread is a single thread that is responsible for creating > all downstream namenode connections. > This is a very critical thread and hence should not die under > exception/error scenarios. > We saw this behavior in production systems where the thread died, leaving the > router process in a bad state. > The thread should also catch a generic error/exception. > {code} > @Override > public void run() { > while (this.running) { > try { > ConnectionPool pool = this.queue.take(); > try { > int total = pool.getNumConnections(); > int active = pool.getNumActiveConnections(); > if (pool.getNumConnections() < pool.getMaxSize() && > active >= MIN_ACTIVE_RATIO * total) { > ConnectionContext conn = pool.newConnection(); > pool.addConnection(conn); > } else { > LOG.debug("Cannot add more than {} connections to {}", > pool.getMaxSize(), pool); > } > } catch (IOException e) { > LOG.error("Cannot create a new connection", e); > } > } catch (InterruptedException e) { > LOG.error("The connection creator was interrupted"); > this.running = false; > } > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
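The fix the issue asks for can be sketched as follows. This is a hedged, self-contained illustration (class and method names are hypothetical, not the actual HDFS-13834 patch): the loop body is wrapped in a catch of `Throwable` so an unexpected `RuntimeException` or `Error` from the pool-handling code cannot kill the creator thread.

```java
// Hedged sketch, not Router code: demonstrates why catching Throwable
// (rather than only IOException/InterruptedException) keeps a critical
// worker loop alive across unexpected errors.
public class CreatorLoopSketch {

    // One iteration of a creator-style loop; `work` stands in for the
    // pool-handling logic, which may throw anything. Returns true when the
    // iteration completed normally, false when a Throwable was swallowed.
    public static boolean runOnce(Runnable work) {
        try {
            work.run();
            return true;
        } catch (Throwable t) { // broader than IOException: also catches Error
            System.err.println("Unexpected error in connection creator: " + t);
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(runOnce(() -> { }));                           // normal iteration
        System.out.println(runOnce(() -> { throw new Error("boom"); }));  // thread survives
    }
}
```

With only the original `catch (IOException e)` / `catch (InterruptedException e)` pair, the `Error` above would propagate out of `run()` and terminate the thread silently.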
[jira] [Updated] (HDFS-13469) RBF: Support InodeID in the Router
[ https://issues.apache.org/jira/browse/HDFS-13469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13469: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Support InodeID in the Router > -- > > Key: HDFS-13469 > URL: https://issues.apache.org/jira/browse/HDFS-13469 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Priority: Major > > The Namenode supports identifying files through inode identifiers. > Currently the Router does not handle this properly; we need to add this > functionality. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13219) RBF: Cluster information on Router is not correct when the Federation shares datanodes
[ https://issues.apache.org/jira/browse/HDFS-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13219: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Cluster information on Router is not correct when the Federation shares > datanodes > -- > > Key: HDFS-13219 > URL: https://issues.apache.org/jira/browse/HDFS-13219 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.9.0 >Reporter: Tao Jie >Priority: Major > Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, > screenshot-4.png, screenshot-5.png > > > Currently the summary information on the Router website aggregates the summary of each > nameservice. However, in a typical federation cluster deployment, datanodes > are shared among nameservices. Consider a cluster with 2 namespaces and 100 > datanodes. The 100 datanodes are available to each namespace, but > we see 200 datanodes on the router website. The same applies to other information such as > {{Total capacity}} and {{Remaining capacity}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13270) RBF: Router audit logger
[ https://issues.apache.org/jira/browse/HDFS-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13270: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Router audit logger > > > Key: HDFS-13270 > URL: https://issues.apache.org/jira/browse/HDFS-13270 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Affects Versions: 3.2.0 >Reporter: maobaolong >Priority: Major > > We can use a router audit logger to log the client info and cmd, because the > FSNamesystem#AuditLogger's log thinks the clients are all from the router. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13254) RBF: Cannot mv/cp file cross namespace
[ https://issues.apache.org/jira/browse/HDFS-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13254: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Cannot mv/cp file cross namespace > -- > > Key: HDFS-13254 > URL: https://issues.apache.org/jira/browse/HDFS-13254 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Priority: Major > > When I try to mv a file from one namespace to another, the client returns an > error. > > Do we have any plan to support cp/mv of files across namespaces? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13274) RBF: Extend RouterRpcClient to use multiple sockets
[ https://issues.apache.org/jira/browse/HDFS-13274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13274: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Extend RouterRpcClient to use multiple sockets > --- > > Key: HDFS-13274 > URL: https://issues.apache.org/jira/browse/HDFS-13274 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > > HADOOP-13144 introduces the ability to create multiple connections for the > same user and use different sockets. The RouterRpcClient should use this > approach to get a better throughput. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13248) RBF: Namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13248: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Namenode need to choose block location for the client > -- > > Key: HDFS-13248 > URL: https://issues.apache.org/jira/browse/HDFS-13248 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, > clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg > > > When executing a put operation via the router, the NameNode will choose the block > location for the router, not for the real client. This affects the file's > locality. > I think that on both the NameNode and the Router, we should add a new addBlock method, or > add a parameter to the current addBlock method, to pass the real client > information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13255) RBF: Fail when try to remove mount point paths
[ https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13255: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Fail when try to remove mount point paths > -- > > Key: HDFS-13255 > URL: https://issues.apache.org/jira/browse/HDFS-13255 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Priority: Major > > When deleting a ns-fed path which includes mount point paths, an error is issued. > Each mount point path needs to be deleted independently. > Operation steps: > {code:java} > [hadp@root]$ hdfs dfsrouteradmin -ls > Mount Table Entries: > Source Destinations Owner Group Mode Quota/Usage > /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, > SsQuota: -/-] > /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, > SsQuota: -/-] > [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/ > Found 2 items > -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 > hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml > -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 > hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml > [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/ > Found 2 items > -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 > hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt > -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 > hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt > [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/ > Found 2 items > -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 > hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml > -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 > hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml > [hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/ > rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using > -skipTrash option > [hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/ > rm: `hdfs://ns-fed/rm-test-all': Input/output error > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation
[ https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13245: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: State store DBMS implementation > > > Key: HDFS-13245 > URL: https://issues.apache.org/jira/browse/HDFS-13245 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: maobaolong >Assignee: Yiran Wu >Priority: Major > Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, > HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, > HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, > HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch, > HDFS-13245.012.patch > > > Add a DBMS implementation for the State Store. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13278) RBF: Correct the logic of mount validate to avoid the bad mountPoint
[ https://issues.apache.org/jira/browse/HDFS-13278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608499#comment-16608499 ] Brahma Reddy Battula commented on HDFS-13278: - [~maobaolong] can you please confirm? > RBF: Correct the logic of mount validate to avoid the bad mountPoint > > > Key: HDFS-13278 > URL: https://issues.apache.org/jira/browse/HDFS-13278 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Affects Versions: 3.2.0 >Reporter: maobaolong >Priority: Major > Labels: RBF > > Correct the logic of mount validate to avoid the bad mountPoint. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13278) RBF: Correct the logic of mount validate to avoid the bad mountPoint
[ https://issues.apache.org/jira/browse/HDFS-13278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13278: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Correct the logic of mount validate to avoid the bad mountPoint > > > Key: HDFS-13278 > URL: https://issues.apache.org/jira/browse/HDFS-13278 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Affects Versions: 3.2.0 >Reporter: maobaolong >Priority: Major > Labels: RBF > > Correct the logic of mount validate to avoid the bad mountPoint. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13443: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Update mount table cache immediately after changing (add/update/remove) > mount table entries. > - > > Key: HDFS-13443 > URL: https://issues.apache.org/jira/browse/HDFS-13443 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Mohammad Arshad >Assignee: Mohammad Arshad >Priority: Major > Labels: RBF > Attachments: HDFS-13443-branch-2.001.patch, > HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, > HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, > HDFS-13443.006.patch, HDFS-13443.007.patch, HDFS-13443.008.patch, > HDFS-13443.009.patch, HDFS-13443.010.patch, HDFS-13443.011.patch > > > Currently the mount table cache is updated periodically; by default the cache is > updated every minute. After a change in the mount table, user operations may still > use the old mount table. This is a bit wrong. > To update the mount table cache, maybe we can do the following: > * *Add a refresh API in MountTableManager which will update the mount table cache.* > * *When there is a change in the mount table entries, the router admin server can > update its cache and ask the other routers to update their caches*. For example, if > there are three routers R1, R2, R3 in a cluster, then the add mount table entry API, > at the admin server side, will perform the following sequence of actions: > ## the user submits an add mount table entry request on R1 > ## R1 adds the mount table entry to the state store > ## R1 calls the refresh API on R2 > ## R1 calls the refresh API on R3 > ## R1 directly refreshes its own cache > ## the add mount table entry response is sent back to the user. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
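The refresh fan-out described in HDFS-13443 can be sketched as below. This is a hedged illustration only: `Router`, `refreshMountTableCache`, and `addEntry` are hypothetical names, not the HDFS-13443 API. The handling router persists the entry, refreshes every peer, then refreshes itself before acknowledging the user.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hedged sketch of the proposed add-entry sequence (illustrative names):
// persist -> refresh each peer router -> refresh local cache -> reply.
public class MountTableRefreshSketch {

    /** Stand-in for the refresh RPC each router would expose. */
    interface Router {
        void refreshMountTableCache();
    }

    /** Returns the ordered actions taken so the sequence is visible. */
    public static List<String> addEntry(String mount, List<Router> peers) {
        List<String> actions = new ArrayList<>();
        actions.add("persist " + mount + " in state store"); // step 2
        for (Router peer : peers) {                          // steps 3-4
            peer.refreshMountTableCache();
            actions.add("refresh peer");
        }
        actions.add("refresh local cache");                  // step 5
        actions.add("reply to user");                        // step 6
        return actions;
    }

    public static void main(String[] args) {
        Router noop = () -> { };
        // Two peers, as in the R1/R2/R3 example where R1 handles the request.
        System.out.println(addEntry("/apps", Arrays.asList(noop, noop)));
    }
}
```

A real implementation would also need to decide what happens when a peer refresh fails (retry, ignore, or fail the admin call); the sketch sidesteps that.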
[jira] [Updated] (HDFS-13470) RBF: Add Browse the Filesystem button to the UI
[ https://issues.apache.org/jira/browse/HDFS-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13470: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Add Browse the Filesystem button to the UI > --- > > Key: HDFS-13470 > URL: https://issues.apache.org/jira/browse/HDFS-13470 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13470.000.patch > > > After HDFS-12512 added WebHDFS, we can add the support to browse the > filesystem to the UI. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13495) RBF: Support Router Admin REST API
[ https://issues.apache.org/jira/browse/HDFS-13495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13495: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: Support Router Admin REST API > -- > > Key: HDFS-13495 > URL: https://issues.apache.org/jira/browse/HDFS-13495 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Mohammad Arshad >Assignee: Mohammad Arshad >Priority: Major > Labels: RBF > > This JIRA intends to add REST API support for all admin commands. Router > Admin REST APIs can be useful in managing the Routers from a central > management layer tool. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13507) RBF: Remove update functionality from routeradmin's add cmd
[ https://issues.apache.org/jira/browse/HDFS-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13507: Parent Issue: HDFS-13891 (was: HDFS-13815) > RBF: Remove update functionality from routeradmin's add cmd > --- > > Key: HDFS-13507 > URL: https://issues.apache.org/jira/browse/HDFS-13507 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Gang Li >Priority: Minor > Labels: incompatible > Attachments: HDFS-13507.000.patch, HDFS-13507.001.patch, > HDFS-13507.002.patch > > > Following up on the discussion in HDFS-13326, we should remove the "update" > functionality from routeradmin's add cmd, to make it consistent with the RPC > calls. > Note that this is an incompatible change. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13507) RBF: Remove update functionality from routeradmin's add cmd
[ https://issues.apache.org/jira/browse/HDFS-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13507: Parent Issue: HDFS-13815 (was: HDFS-12615) > RBF: Remove update functionality from routeradmin's add cmd > --- > > Key: HDFS-13507 > URL: https://issues.apache.org/jira/browse/HDFS-13507 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Gang Li >Priority: Minor > Labels: incompatible > Attachments: HDFS-13507.000.patch, HDFS-13507.001.patch, > HDFS-13507.002.patch > > > Following up on the discussion in HDFS-13326, we should remove the "update" > functionality from routeradmin's add cmd, to make it consistent with the RPC > calls. > Note that this is an incompatible change. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13577) RBF: Failed mount point operations, returns wrong exit code.
[ https://issues.apache.org/jira/browse/HDFS-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608496#comment-16608496 ] Brahma Reddy Battula commented on HDFS-13577: - HDFS-13815 addresses this issue; can you guys cross-check once? > RBF: Failed mount point operations, returns wrong exit code. > > > Key: HDFS-13577 > URL: https://issues.apache.org/jira/browse/HDFS-13577 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Y. SREENIVASULU REDDY >Assignee: Dibyendu Karmakar >Priority: Major > Labels: RBF > > If the client performs an add mount point with some special character, the mount > point add fails. > And it prints a message like > {noformat} > 18/05/17 09:58:34 DEBUG ipc.ProtobufRpcEngine: Call: addMountTableEntry took > 19ms Cannot add mount point /testSpecialCharMountPointCreation/test/ > {noformat} > In the above case it should return a non-zero exit code. > {code:java|title=RouterAdmin.java|borderStyle=solid} > Exception debugException = null; > exitCode = 0; > try { > if ("-add".equals(cmd)) { > if (addMount(argv, i)) { > System.out.println("Successfully added mount point " + argv[i]); > } > {code} > We should handle these kinds of cases also. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
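The handling the reporter suggests can be sketched as follows. This is a hedged illustration, not the committed patch: `runAddCommand` is a hypothetical stand-in for RouterAdmin's "-add" branch, with the boolean mimicking the result of `addMount()`. The point is that the failure path must set a non-zero exit code instead of falling through with 0.

```java
// Hedged sketch (illustrative only): on a failed add, set a non-zero exit
// code so callers and scripts can detect the failure.
public class ExitCodeSketch {

    public static int runAddCommand(boolean added) {
        int exitCode = 0;
        if (added) {
            System.out.println("Successfully added mount point");
        } else {
            System.err.println("Cannot add mount point");
            exitCode = -1; // non-zero on failure, as the report suggests
        }
        return exitCode;
    }

    public static void main(String[] args) {
        System.out.println(runAddCommand(true));   // success path
        System.out.println(runAddCommand(false));  // failure path, non-zero
    }
}
```

In the quoted snippet, only the success branch prints anything and `exitCode` stays 0 either way, which is exactly the bug being reported.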
[jira] [Updated] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
[ https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-13853: Parent Issue: HDFS-13891 (was: HDFS-12615) > RBF: RouterAdmin update cmd is overwriting the entry not updating the existing > -- > > Key: HDFS-13853 > URL: https://issues.apache.org/jira/browse/HDFS-13853 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Major > > {code:java} > // Create a new entry > Map<String, String> destMap = new LinkedHashMap<>(); > for (String ns : nss) { > destMap.put(ns, dest); > } > MountTable newEntry = MountTable.newInstance(mount, destMap); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
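The bug this issue describes can be illustrated as below. This is a hedged sketch, not Router code: building a fresh `LinkedHashMap` on "update" drops every destination that is not named in the command, so the entry is overwritten rather than updated; a true update would seed the map with the existing destinations.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged illustration of overwrite-vs-update semantics (hypothetical
// helpers, not RouterAdmin code).
public class UpdateVsOverwriteSketch {

    // What the quoted snippet effectively does: start from an empty map,
    // losing any destinations the existing entry had.
    static Map<String, String> overwrite(Map<String, String> existing,
                                         String ns, String dest) {
        Map<String, String> destMap = new LinkedHashMap<>();
        destMap.put(ns, dest);
        return destMap;
    }

    // A true update: seed from the existing destinations, then apply the change.
    static Map<String, String> update(Map<String, String> existing,
                                      String ns, String dest) {
        Map<String, String> destMap = new LinkedHashMap<>(existing);
        destMap.put(ns, dest);
        return destMap;
    }

    public static void main(String[] args) {
        Map<String, String> existing = new LinkedHashMap<>();
        existing.put("ns0", "/a");
        existing.put("ns1", "/b");
        System.out.println(overwrite(existing, "ns2", "/c").size()); // ns0/ns1 lost
        System.out.println(update(existing, "ns2", "/c").size());    // all three kept
    }
}
```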
[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2
[ https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608492#comment-16608492 ] Brahma Reddy Battula commented on HDFS-12615: - Thanks [~elgoiri] and [~anu] .. will move out and close this. > Router-based HDFS federation phase 2 > > > Key: HDFS-12615 > URL: https://issues.apache.org/jira/browse/HDFS-12615 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Labels: RBF > > This umbrella JIRA tracks set of improvements over the Router-based HDFS > federation (HDFS-10467). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands
[ https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608490#comment-16608490 ] Brahma Reddy Battula commented on HDFS-13862: - Yes, but the commands are different, right? You can keep the format the same. "dfsadmin" will not work with the router, right? > RBF: Router logs are not capturing few of the dfsrouteradmin commands > - > > Key: HDFS-13862 > URL: https://issues.apache.org/jira/browse/HDFS-13862 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Soumyapn >Assignee: Ayush Saxena >Priority: Major > Labels: RBF > Attachments: HDFS-13862-01.patch, HDFS-13862-02.patch, > HDFS-13862-03.patch, HDFS-13862-04.patch > > > Test Steps : > The below commands are not getting captured in the Router logs: > # Destination entry name in the add command. The log says "Added new mount point > /apps9 to resolver". > # Safemode enter|leave|get commands > # nameservice enable -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org