Re: [DISCUSS] Move to gitbox
+1 (non-binding)

On 12/11/18, 9:07 PM, "Brahma Reddy Battula" wrote:

+1

On Sat, Dec 8, 2018 at 1:26 PM, Akira Ajisaka wrote:
> Hi all,
>
> The Apache Hadoop git repository is on the git-wip-us server, which will be
> decommissioned.
> If there are no objections, I'll file a JIRA ticket with INFRA to
> migrate to https://gitbox.apache.org/ and update the documentation.
>
> According to the ASF infra team, the timeframe is as follows:
>
> > - December 9th 2018 -> January 9th 2019: Voluntary (coordinated) relocation
> > - January 9th -> February 6th: Mandated (coordinated) relocation
> > - February 7th: All remaining repositories are mass migrated.
>
> This timeline may change to accommodate various scenarios.
>
> If we get consensus by January 9th, I can file a ticket with INFRA and
> migrate the repository.
> Even if we cannot get consensus, the repository will be migrated by
> February 7th.
>
> Regards,
> Akira
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

--
--Brahma Reddy Battula
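For anyone with an existing clone, the move described above mostly means repointing the git remote once the repository lands on gitbox. A minimal sketch, assuming the post-migration repository path is `repos/asf/hadoop.git` (shown against a throwaway repository so the commands run standalone):

```shell
# Create a throwaway repo standing in for an existing Hadoop clone.
cd "$(mktemp -d)" && git init -q .
git remote add origin https://git-wip-us.apache.org/repos/asf/hadoop.git

# The actual post-migration step: repoint origin at gitbox.
git remote set-url origin https://gitbox.apache.org/repos/asf/hadoop.git

# Confirm the new remote URL.
git remote -v
```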
Re: [DISCUSS] Move to gitbox
+1

Thanks,
Bharat

On 12/11/18, 9:07 PM, "Brahma Reddy Battula" wrote:

+1

On Sat, Dec 8, 2018 at 1:26 PM, Akira Ajisaka wrote:
> Hi all,
>
> The Apache Hadoop git repository is on the git-wip-us server, which will be
> decommissioned.
> If there are no objections, I'll file a JIRA ticket with INFRA to
> migrate to https://gitbox.apache.org/ and update the documentation. [...]
>
> Regards,
> Akira

--
--Brahma Reddy Battula
Re: [DISCUSS] Move to gitbox
+1

On Sat, Dec 8, 2018 at 1:26 PM, Akira Ajisaka wrote:
> Hi all,
>
> The Apache Hadoop git repository is on the git-wip-us server, which will be
> decommissioned.
> If there are no objections, I'll file a JIRA ticket with INFRA to
> migrate to https://gitbox.apache.org/ and update the documentation. [...]
>
> Regards,
> Akira

--
--Brahma Reddy Battula
[jira] [Created] (HADOOP-16000) Remove TLSv1 and SSLv2Hello from the default value of hadoop.ssl.enabled.protocols
Akira Ajisaka created HADOOP-16000:
--------------------------------------

             Summary: Remove TLSv1 and SSLv2Hello from the default value of hadoop.ssl.enabled.protocols
                 Key: HADOOP-16000
                 URL: https://issues.apache.org/jira/browse/HADOOP-16000
             Project: Hadoop Common
          Issue Type: Improvement
          Components: security
            Reporter: Akira Ajisaka

{code}
public static final String SSL_ENABLED_PROTOCOLS_DEFAULT =
    "TLSv1,SSLv2Hello,TLSv1.1,TLSv1.2";
{code}
TLSv1 and SSLv2Hello are considered vulnerable. Let's remove them from the default.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
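Until the default changes, clusters can override the property themselves. A sketch of the override, assuming the value goes in core-site.xml and that the remaining list would be exactly the current default minus the two weak entries (both assumptions, not anything stated in the issue):

```xml
<!-- core-site.xml: drop TLSv1 and SSLv2Hello from the enabled protocols
     (assumed value list: current default minus the two weak entries). -->
<property>
  <name>hadoop.ssl.enabled.protocols</name>
  <value>TLSv1.1,TLSv1.2</value>
</property>
```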
[jira] [Resolved] (HADOOP-15841) ABFS: change createRemoteFileSystemDuringInitialization default to true
[ https://issues.apache.org/jira/browse/HADOOP-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Mackrory resolved HADOOP-15841.
---------------------------------------
    Resolution: Won't Fix

> ABFS: change createRemoteFileSystemDuringInitialization default to true
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-15841
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15841
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Sean Mackrory
>            Assignee: Sean Mackrory
>            Priority: Major
>
> I haven't seen a way to create a working container (at least for the dfs
> endpoint) except by setting
> fs.azure.createRemoteFileSystemDuringInitialization=true. I personally don't
> see much of a downside to having it default to true, and it's a mild
> inconvenience to remember to set it to true for some action to create a
> container. I vaguely recall [~tmarquardt] considering changing this default
> too.
> I propose we do it.
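Since the issue was resolved Won't Fix, the setting still has to be enabled explicitly when a container should be created at filesystem initialization. A sketch of the override (the property name is from the issue; placing it in core-site.xml is an assumption):

```xml
<!-- core-site.xml: let the ABFS connector create the container on init.
     Remains opt-in because HADOOP-15841 was resolved Won't Fix. -->
<property>
  <name>fs.azure.createRemoteFileSystemDuringInitialization</name>
  <value>true</value>
</property>
```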
[jira] [Created] (HADOOP-15999) [s3a] Better support for out-of-band operations
Sean Mackrory created HADOOP-15999:
--------------------------------------

             Summary: [s3a] Better support for out-of-band operations
                 Key: HADOOP-15999
                 URL: https://issues.apache.org/jira/browse/HADOOP-15999
             Project: Hadoop Common
          Issue Type: New Feature
            Reporter: Sean Mackrory

S3Guard was initially built on the premise that a new MetadataStore would be the source of truth, and that it wouldn't provide guarantees if updates were done without using S3Guard. I've been seeing increased demand for better support for scenarios where operations are done on the data that can't reasonably be done with S3Guard involved. For example:
* A file is deleted using S3Guard and replaced by some other tool. S3Guard can't tell the difference between the new file and delete/list inconsistency, and continues to treat the file as deleted.
* An S3Guard-ed file is overwritten with a longer file by some other tool. When reading the file, only the length of the original file is read.

We could possibly have smarter behavior here by querying both S3 and the MetadataStore (even in cases where we may currently only query the MetadataStore in getFileStatus) and using whichever one has the higher modified time. This kills the performance boost we currently get in some workloads with the short-circuited getFileStatus, but we could keep it with authoritative mode, which should give a larger performance boost. At least we'd get more correctness without authoritative mode, and a clear declaration of when we can make the assumptions required to short-circuit the process.

If we can't consider S3Guard the source of truth, we need to defer to S3 more. We'd need to be extra sure of any locality / time zone issues if we start relying on mod_time more directly, but currently we're tracking the modification time as returned by S3 anyway.
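The "use whichever one has the higher modified time" reconciliation proposed above can be sketched in isolation. This is a hypothetical illustration, not the actual S3Guard API: the class, fields, and method names are all made up, and real timestamps would need the locality/time-zone caveats the issue mentions:

```java
// Hypothetical sketch of out-of-band reconciliation for HADOOP-15999:
// when S3 and the MetadataStore disagree, trust whichever side reports
// the newer modification time. Names are illustrative only.
public class OobReconcile {

    /** Simplified stand-in for a file status record. */
    public static final class Status {
        final long length;
        final long modTime;    // epoch millis, as returned by S3
        final boolean deleted; // MetadataStore tombstone
        public Status(long length, long modTime, boolean deleted) {
            this.length = length;
            this.modTime = modTime;
            this.deleted = deleted;
        }
    }

    /**
     * Returns the status to trust, or null if the path should be treated
     * as absent. s3 == null means S3 has no object; store == null means
     * the MetadataStore has no entry for the path.
     */
    public static Status reconcile(Status s3, Status store) {
        if (store == null) {
            return s3; // untracked path: S3 is all we have
        }
        if (s3 == null) {
            return store.deleted ? null : store;
        }
        if (store.deleted) {
            // Tombstone, but a newer S3 object: re-created out of band.
            return s3.modTime > store.modTime ? s3 : null;
        }
        // Both present: an out-of-band overwrite wins if it is newer.
        return s3.modTime >= store.modTime ? s3 : store;
    }

    public static void main(String[] args) {
        // Out-of-band overwrite: the S3 copy is longer and newer, so its
        // length (not the store's stale one) should be used for reads.
        Status r = reconcile(new Status(2048, 2000, false),
                             new Status(1024, 1000, false));
        if (r.length != 2048) throw new AssertionError("expected S3 to win");

        // Tombstone in the store, but the object was re-created afterwards.
        r = reconcile(new Status(10, 3000, false), new Status(0, 2500, true));
        if (r == null) throw new AssertionError("re-created file is visible");
        System.out.println("ok");
    }
}
```

The cost noted in the issue shows up directly here: `reconcile` needs both a `getFileStatus` against S3 and a MetadataStore lookup, which is exactly the short-circuit that authoritative mode would preserve.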
Re: [DISCUSS] Move to gitbox
+1.

On Tue, Dec 11, 2018 at 12:10 AM Mukul Kumar Singh wrote:

> +1
> -Mukul
>
> On 12/11/18, 9:56 AM, "Weiwei Yang" wrote:
> > +1
> >
> > On Tue, Dec 11, 2018 at 10:51 AM Anu Engineer <aengin...@hortonworks.com> wrote:
> > > +1
> > > --Anu
> > >
> > > On 12/10/18, 6:38 PM, "Vinayakumar B" wrote:
> > > > +1
> > > > -Vinay
> > > >
> > > > On Mon, 10 Dec 2018, 1:22 pm Elek, Marton wrote:
> > > > > Thanks Akira,
> > > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > I think it's better to do it now, at a planned date.
> > > > >
> > > > > If I understood well, the only bigger task here is to update all the
> > > > > Jenkins jobs. (I am happy to help/contribute what I can do.)
> > > > >
> > > > > Marton
> > > > >
> > > > > On 12/8/18 6:25 AM, Akira Ajisaka wrote: [...]
> >
> > --
> > Weiwei Yang
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/

[Dec 10, 2018 3:55:24 AM] (wwei) YARN-9009. Fix flaky test TestEntityGroupFSTimelineStore.testCleanLogs.
[Dec 10, 2018 9:05:38 AM] (elek) HDDS-879. MultipartUpload: Add InitiateMultipartUpload in ozone.
[Dec 10, 2018 7:06:50 PM] (haibochen) YARN-8738. FairScheduler should not parse negative maxResources or
[Dec 10, 2018 7:12:54 PM] (haibochen) YARN-9087. Improve logging for initialization of Resource plugins.
[Dec 10, 2018 9:03:08 PM] (mackrorysd) HDFS-14101. Fixing underflow error in test. Contributed by Zsolt
[Dec 10, 2018 9:03:08 PM] (mackrorysd) HADOOP-15428. s3guard bucket-info will create s3guard table if FS is set

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

Failed junit tests:
    hadoop.ha.TestZKFailoverController
    hadoop.registry.secure.TestSecureLogins
    hadoop.hdfs.web.TestWebHdfsTimeouts
    hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor

cc:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/diff-compile-cc-root.txt [4.0K]

javac:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/diff-compile-javac-root.txt [336K]

checkstyle:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/diff-checkstyle-root.txt [17M]

hadolint:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/diff-patch-hadolint.txt [4.0K]

pathlen:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/pathlen.txt [12K]

pylint:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/diff-patch-pylint.txt [40K]

shellcheck:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/diff-patch-shellcheck.txt [20K]

shelldocs:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/diff-patch-shelldocs.txt [12K]

whitespace:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/whitespace-eol.txt [9.3M]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/whitespace-tabs.txt [1.1M]

findbugs:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-hdds_client.txt [8.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [4.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [4.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [8.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-hdds_tools.txt [4.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-ozone_client.txt [8.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-ozone_common.txt [4.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt [8.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt [4.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt [12K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt [4.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/branch-findbugs-hadoop-ozone_tools.txt [8.0K]

javadoc:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/diff-javadoc-javadoc-root.txt [752K]

unit:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [160K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/patch-unit-hadoop-common-project_hadoop-registry.txt [12K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/984/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [324K]
[jira] [Created] (HADOOP-15997) KMS client uses wrong UGI after HADOOP-14445
Wei-Chiu Chuang created HADOOP-15997:
----------------------------------------

             Summary: KMS client uses wrong UGI after HADOOP-14445
                 Key: HADOOP-15997
                 URL: https://issues.apache.org/jira/browse/HADOOP-15997
             Project: Hadoop Common
          Issue Type: Bug
         Environment: Hadoop 3.0.x, Kerberized, HDFS at-rest encryption, multiple KMS
            Reporter: Wei-Chiu Chuang

After HADOOP-14445, the KMS client always authenticates itself using the credentials of the login user rather than the current user.

{noformat}
2018-12-07 15:58:30,663 DEBUG [main] org.apache.hadoop.crypto.key.kms.KMSClientProvider: Using loginUser when Kerberos is enabled but the actual user does not have either KMS Delegation Token or Kerberos Credentials
{noformat}

This log message is printed because the lookup in {{KMSClientProvider#containsKmsDt()}} comes back null even when the user definitely has the KMS delegation token. {{KMSClientProvider#containsKmsDt()}} should select the delegation token using {{clientTokenProvider.selectDelegationToken(creds)}} rather than checking whether its dtService is in the user credentials. This is done correctly in {{KMSClientProvider#createAuthenticatedURL}}.

We found this bug when it broke Cloudera's Backup and Disaster Recovery tool. [~daryn] [~xiaochen] mind taking a look? HADOOP-14445 is a huge patch, but it is almost perfect except for this bug.
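The difference between the two lookup styles described above can be illustrated in isolation. This is a hypothetical, self-contained sketch, not Hadoop's actual `Token`/`Credentials` classes: it only shows why an exact service-string check can miss a token that a kind-based selector would find (e.g. with multiple load-balanced KMS instances; the service strings below are made up):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch for HADOOP-15997: a credentials bag holding tokens
// with a "kind" and a "service". An exact service-string check (the buggy
// containsKmsDt-style lookup) misses a token whose service string differs,
// while a kind-based selector (selectDelegationToken-style) still finds it.
public class TokenSelectDemo {
    static final class Token {
        final String kind, service;
        Token(String kind, String service) { this.kind = kind; this.service = service; }
    }

    // Buggy style: only an exact service match counts.
    static boolean containsByService(List<Token> creds, String service) {
        for (Token t : creds) {
            if (t.service.equals(service)) return true;
        }
        return false;
    }

    // Selector style: match on the token kind, whatever the service string is.
    static Token selectByKind(List<Token> creds, String kind) {
        for (Token t : creds) {
            if (t.kind.equals(kind)) return t;
        }
        return null;
    }

    public static void main(String[] args) {
        List<Token> creds = new ArrayList<>();
        // Token issued against a load-balanced, multi-host KMS address
        // (made-up service strings).
        creds.add(new Token("kms-dt", "kms://https@kms1;kms2:9600/kms"));

        // The client checks a single-host service string: no match,
        // so it falls back to the login user's Kerberos credentials.
        if (containsByService(creds, "kms://https@kms1:9600/kms"))
            throw new AssertionError("unexpected service match");
        // A kind-based selector still finds the delegation token.
        if (selectByKind(creds, "kms-dt") == null)
            throw new AssertionError("selector should find the token");
        System.out.println("ok");
    }
}
```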
Re: [DISCUSS] Move to gitbox
+1
-Mukul

On 12/11/18, 9:56 AM, "Weiwei Yang" wrote:

+1

On Tue, Dec 11, 2018 at 10:51 AM Anu Engineer wrote:
> +1
> --Anu
>
> On 12/10/18, 6:38 PM, "Vinayakumar B" wrote:
> > +1
> > -Vinay
> >
> > On Mon, 10 Dec 2018, 1:22 pm Elek, Marton wrote:
> > > Thanks Akira,
> > >
> > > +1 (non-binding)
> > >
> > > I think it's better to do it now, at a planned date.
> > >
> > > If I understood well, the only bigger task here is to update all the
> > > Jenkins jobs. (I am happy to help/contribute what I can do.)
> > >
> > > Marton
> > >
> > > On 12/8/18 6:25 AM, Akira Ajisaka wrote: [...]

--
Weiwei Yang