Hadoop 3.2 Release Plan proposal
Hi All,

To continue a faster cadence of releases and to accommodate more features, we could plan a Hadoop 3.2 release around the end of August. To start the process sooner and to establish a timeline, I propose targeting the Hadoop 3.2.0 release by the end of August 2018 (about 1.5 months from now). I would also like to take this opportunity to come up with a detailed plan:

- Feature freeze date: all features should be merged by August 10, 2018.
- Code freeze date: blockers/critical issues only, no improvements or non-blocker/critical bug fixes, by August 24, 2018.
- Release date: August 31, 2018.

I have tried to come up with a list of features on my radar which could be candidates for the 3.2 release:

- YARN-3409, Node Attributes support (Owner: Naganarasimha/Sunil)
- YARN-8135, Hadoop Submarine project for deep learning workloads in YARN (Owner: Wangda Tan)
- YARN Native Service / Docker feature hardening and stabilization work in YARN

There are several other HDFS features that should be released with 3.2 as well; I am quoting a few here:

- HDFS-10285, Storage Policy Satisfier (Owner: Uma/Rakesh)
- Improvements to HDFS-12615, Router-based HDFS federation

Please let me know if I missed any features targeted to 3.2 per this timeline. I would like to volunteer as release manager for the 3.2.0 release.

Please let me know if you have any suggestions.

Thanks,
Sunil Govindan
[jira] [Created] (HDFS-13738) fsck -list-corruptfileblocks has infinite loop if user is not privileged.
Wei-Chiu Chuang created HDFS-13738:
--
Summary: fsck -list-corruptfileblocks has an infinite loop if the user is not privileged.
Key: HDFS-13738
URL: https://issues.apache.org/jira/browse/HDFS-13738
Project: Hadoop HDFS
Issue Type: Bug
Components: tools
Affects Versions: 3.0.0, 2.6.0
Environment: Kerberized Hadoop cluster
Reporter: Wei-Chiu Chuang

Execute the following commands as any non-privileged user:
{noformat}
# create an empty directory
$ hdfs dfs -mkdir /tmp/fsck_test
# run fsck
$ hdfs fsck /tmp/fsck_test -list-corruptfileblocks
{noformat}
{noformat}
FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds
Access denied for user systest. Superuser privilege is required
Fsck on path '/tmp' FAILED
FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 0 milliseconds
Access denied for user systest. Superuser privilege is required
Fsck on path '/tmp' FAILED
FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds
Access denied for user systest. Superuser privilege is required
Fsck on path '/tmp' FAILED
{noformat}
Reproducible on Hadoop 3.0.0 as well as 2.6.0.
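The report does not show the client-side code, but the repeated FAILED output is consistent with a cookie-driven listing loop that reports the access-denied error without treating it as fatal. A minimal, hypothetical sketch of that pattern follows; the class and method names are illustrative only and are not the actual DFSck implementation:
{code}
// Hypothetical sketch of the failure pattern described above (not the actual
// DFSck code): a cookie-driven listing loop that prints the access error but
// never treats it as terminal, so the same failing call repeats forever.
import java.util.Collections;
import java.util.List;

public class CorruptBlocksLoopSketch {

  /** Stand-in for the NameNode call; rejects any non-superuser. */
  static List<String> listCorruptFileBlocks(String path, String cookie,
      boolean superuser) {
    if (!superuser) {
      throw new SecurityException(
          "Access denied for user systest. Superuser privilege is required");
    }
    return Collections.emptyList(); // no corrupt blocks in this toy case
  }

  public static void main(String[] args) {
    String cookie = null;
    int failures = 0;
    while (true) {
      try {
        List<String> batch =
            listCorruptFileBlocks("/tmp/fsck_test", cookie, false);
        if (batch.isEmpty()) {
          break;                    // normal termination on an empty batch
        }
        // on success the cookie would be advanced here
      } catch (SecurityException e) {
        System.out.println("Fsck on path '/tmp' FAILED: " + e.getMessage());
        // Bug pattern: no break/rethrow here, so the loop spins indefinitely.
        if (++failures > 3) {
          break;                    // guard added only so this sketch halts
        }
      }
    }
  }
}
{code}
A fix along these lines would treat the AccessControlException as terminal (break or rethrow) rather than looping back to the same failing call.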
[jira] [Resolved] (HDFS-13690) Improve error message when creating encryption zone while KMS is unreachable
[ https://issues.apache.org/jira/browse/HDFS-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen resolved HDFS-13690.
--
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 3.2.0

Committed to trunk. Thanks for the contribution [~knanasi]!

> Improve error message when creating encryption zone while KMS is unreachable
>
> Key: HDFS-13690
> URL: https://issues.apache.org/jira/browse/HDFS-13690
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: encryption, hdfs, kms
> Reporter: Kitti Nanasi
> Assignee: Kitti Nanasi
> Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-13690.001.patch, HDFS-13690.002.patch, HDFS-13690.003.patch, HDFS-13690.004.patch
>
> In failure testing, we stopped the KMS and then tried to run some encryption-related commands.
> {{hdfs crypto -createZone}} will complain with a short "RemoteException: Connection refused." This message could be improved to explain that we cannot connect to the KMSClientProvider.
> For example, {{hadoop key list}} while the KMS is down will error:
> {code}
> -bash-4.1$ hadoop key list
> Cannot list keys for KeyProvider: KMSClientProvider[http://hdfs-cdh5-vanilla-1.vpc.cloudera.com:16000/kms/v1/]: Connection refused
> java.net.ConnectException: Connection refused
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
> at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
> at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
> at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
> at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
> at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
> at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
> at org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
> at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}
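The committed fix is in the attached HDFS-13690 patches and is not reproduced here. As a hedged illustration of the kind of improvement the issue asks for, a client-side wrapper could attach the KMS endpoint to the low-level connection failure before rethrowing; the class and method names below are hypothetical and are not Hadoop's actual API:
{code}
// Illustrative only: wrap a low-level connection failure with context that
// names the KMS endpoint, so "Connection refused" is not the whole message.
// Hypothetical helper, not the actual KMSClientProvider code.
import java.io.IOException;
import java.net.ConnectException;
import java.net.HttpURLConnection;
import java.net.URL;

public class KmsErrorContextSketch {

  /** Open a connection to the KMS, adding the endpoint to any failure. */
  static HttpURLConnection openKmsConnection(URL kmsUrl) throws IOException {
    try {
      HttpURLConnection conn = (HttpURLConnection) kmsUrl.openConnection();
      conn.connect();
      return conn;
    } catch (ConnectException e) {
      // Rethrow with the provider URL so the CLI error explains what could
      // not be reached, not just that a connection was refused.
      throw new IOException(
          "Cannot connect to KMSClientProvider[" + kmsUrl + "]: "
              + e.getMessage(), e);
    }
  }

  public static void main(String[] args) throws Exception {
    URL kms = new URL("http://localhost:16000/kms/v1/");
    try {
      openKmsConnection(kms);
    } catch (IOException e) {
      System.err.println(e.getMessage());
    }
  }
}
{code}
With wrapping of this sort, a failure from {{hdfs crypto -createZone}} would name the unreachable KMS endpoint instead of reporting only "Connection refused".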