[jira] [Resolved] (HADOOP-16913) ABFS: Support for OAuth v2.0 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-16913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilahari T H resolved HADOOP-16913.
-----------------------------------
    Resolution: Fixed

> ABFS: Support for OAuth v2.0 endpoints
> --------------------------------------
>
>                 Key: HADOOP-16913
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16913
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Bilahari T H
>            Assignee: Bilahari T H
>            Priority: Major
>
> Driver should support v2.0 auth endpoints

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16659) ABFS: add missing docs for configuration
[ https://issues.apache.org/jira/browse/HADOOP-16659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilahari T H resolved HADOOP-16659.
-----------------------------------
    Resolution: Fixed

> ABFS: add missing docs for configuration
> ----------------------------------------
>
>                 Key: HADOOP-16659
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16659
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.2.2
>            Reporter: Da Zhou
>            Assignee: Bilahari T H
>            Priority: Major
>
> Double-check the docs for ABFS and WASB configurations and add the missing ones.
[jira] [Resolved] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob
[ https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani resolved HADOOP-16818.
-----------------------------
    Release Note: It was decided to drop the usage of this feature/API (combined calls) in the driver. There is a separate JIRA for appendblob support.
      Resolution: Won't Fix

> ABFS: Combine append+flush calls for blockblob & appendblob
> -----------------------------------------------------------
>
>                 Key: HADOOP-16818
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16818
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Bilahari T H
>            Assignee: Ishani
>            Priority: Minor
>
> Combine append+flush calls for blockblob & appendblob
[jira] [Created] (HADOOP-17087) Add EC flag to stat commands
Hongbing Wang created HADOOP-17087:
--------------------------------------

             Summary: Add EC flag to stat commands
                 Key: HADOOP-17087
                 URL: https://issues.apache.org/jira/browse/HADOOP-17087
             Project: Hadoop Common
          Issue Type: Improvement
          Components: common
            Reporter: Hongbing Wang

We currently have no concise way to tell whether a file is erasure coded. {{hdfs fsck}} can, but it prints far too much information, and neither {{du}} nor {{ls}} identifies an EC file. So I added an EC flag to the stat CLI.

Old result:
{code:java}
$ hadoop fs -stat "%F" /user/ec/ec.txt
regular file
$ hadoop fs -stat "%F" /user/rep/rep.txt
regular file
{code}

New result:
{code:java}
$ hadoop fs -stat "%F" /user/ec/ec.txt
erasure coding file
$ hadoop fs -stat "%F" /user/rep/rep.txt
replica file
{code}
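The before/after outputs above can be summarized in a small sketch. This is not the Stat command code; `fileType` is a hypothetical helper, and the two booleans stand in for `FileStatus#isDirectory` and the proposed erasure-coding flag:

```java
// Illustrative only: how a "%F" formatter could branch on an EC flag.
public class StatTypeDemo {
    // isDir / isEc stand in for FileStatus#isDirectory and the proposed EC flag.
    static String fileType(boolean isDir, boolean isEc) {
        if (isDir) return "directory";
        return isEc ? "erasure coding file" : "replica file";
    }

    public static void main(String[] args) {
        System.out.println(fileType(false, true));   // e.g. /user/ec/ec.txt
        System.out.println(fileType(false, false));  // e.g. /user/rep/rep.txt
    }
}
```

With such a flag, a plain `hadoop fs -stat "%F" <path>` is enough to distinguish an EC file from a replicated one, without running a full `hdfs fsck`.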
Re: [VOTE] Release Apache Hadoop 3.1.4 (RC1)
Thanks for looking into this Akira, Kihwal!

I noted that the situation described in HDFS-14941 is hard to reproduce. The issue created by HDFS-14941 would be even harder to fix in HDFS-15421, test, and prove stable. That's why I will revert HDFS-14941 and create an RC3.

* I withdraw this vote now for RC2 because of that blocker issue (HDFS-15421). I will create an RC3 with HDFS-14941 reverted. *

Regards,
Gabor

On Tue, Jun 23, 2020 at 4:59 PM Kihwal Lee wrote:
> Gabor,
> If you want to release asap, you can simply revert HDFS-14941 in the
> release branch for now. It is causing the issue and was committed after
> 3.1.3. This causes failure of the automated upgrade process and namenode
> memory leak.
>
> Kihwal
>
> On Tue, Jun 23, 2020 at 8:47 AM Akira Ajisaka wrote:
> > Hi Gabor,
> >
> > Thank you for your work!
> >
> > Kihwal reported IBR leak in standby NameNode:
> > https://issues.apache.org/jira/browse/HDFS-15421.
> > I think this is a blocker and this affects 3.1.4-RC1. Would you check this?
> >
> > Best regards,
> > Akira
> >
> > On Mon, Jun 22, 2020 at 10:26 PM Gabor Bota wrote:
> > > Hi folks,
> > >
> > > I have put together a release candidate (RC1) for Hadoop 3.1.4.
> > >
> > > The RC is available at:
> > > http://people.apache.org/~gabota/hadoop-3.1.4-RC1/
> > > The RC tag in git is here:
> > > https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC1
> > > The maven artifacts are staged at
> > > https://repository.apache.org/content/repositories/orgapachehadoop-1267/
> > >
> > > You can find my public key at:
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > and http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C
> > >
> > > Please try the release and vote. The vote will run for 5 weekdays,
> > > until June 30, 2020, 23:00 CET.
> > >
> > > Thanks,
> > > Gabor
Re: [VOTE] Release Apache Hadoop 3.1.4 (RC1)
Hi Gabor,
As you are going ahead with another RC, please include HDFS-15323 as well if possible:

https://issues.apache.org/jira/browse/HDFS-15323

I remember tagging you there, but at that time RC0 was already up; please see if that one could make it into the release as well.

Thanx!!!
-Ayush

On Wed, 24 Jun 2020 at 16:16, Gabor Bota wrote:
> Thanks for looking into this Akira, Kihwal!
> [...]
[jira] [Created] (HADOOP-17088) Failed to load Xinclude files with relative path in case of loading conf via URI
Yushi Hayasaka created HADOOP-17088:
---------------------------------------

             Summary: Failed to load Xinclude files with relative path in case of loading conf via URI
                 Key: HADOOP-17088
                 URL: https://issues.apache.org/jira/browse/HADOOP-17088
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Yushi Hayasaka

When we create a configuration file that loads an external XML file via a relative XInclude path and try to load it by calling `Configuration.addResource(URI)`, loading the external XML fails after https://issues.apache.org/jira/browse/HADOOP-14216 was merged:

{noformat}
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Fetch fail on include for 'mountTable.xml' with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml'
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
	at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896)
	at com.company.test.Main.main(Main.java:29)
Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml'
	at org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271)
	at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331)
	at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007)
	... 4 more
{noformat}

The cause is that the URI is passed as a string to the java.io.File constructor, and File does not understand a "file:" URI string, so my suggestion is to convert the string to a URI first.
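The root cause can be reproduced with the JDK alone: `java.io.File` treats a `file:` URI string as a literal path, while the `File(URI)` constructor resolves it to the real filesystem path. This is a minimal sketch of the suggested fix; the names here (`UriToFileDemo`, `toLocalFile`) are illustrative, not the `Configuration` code itself:

```java
import java.io.File;
import java.net.URI;

public class UriToFileDemo {
    // Convert a resource string to a File, going through URI when the
    // string carries a "file:" scheme, as the issue suggests.
    static File toLocalFile(String resource) {
        URI uri = URI.create(resource);
        // Fall back to the plain-path constructor for non-URI inputs.
        return "file".equals(uri.getScheme()) ? new File(uri) : new File(resource);
    }

    public static void main(String[] args) {
        // Passed as a raw string, the scheme ends up inside the path:
        System.out.println(new File("file:/opt/hadoop/etc/hadoop/core-site.xml").getPath());
        // Converted to a URI first, the actual path is recovered:
        System.out.println(toLocalFile("file:/opt/hadoop/etc/hadoop/core-site.xml").getPath());
    }
}
```

The broken variant yields a path that still begins with `file:`, which is why the relative XInclude lookup fails to resolve against the parent directory.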
[jira] [Resolved] (HADOOP-17054) ABFS: Fix idempotency test failures when SharedKey is set as AuthType
[ https://issues.apache.org/jira/browse/HADOOP-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Marqardt resolved HADOOP-17054.
--------------------------------------
    Resolution: Fixed

Accidentally reactivated HADOOP-17015 but meant to reactivate HADOOP-17054. Please ignore the previous comment.

> ABFS: Fix idempotency test failures when SharedKey is set as AuthType
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-17054
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17054
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.2.1
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Major
>             Fix For: 3.4.0
>
> Idempotency-related tests added as part of https://issues.apache.org/jira/browse/HADOOP-17015 create a test AbfsClient instance. This mock instance wrongly accepts a valid SharedKey or OAuth token provider instance, which leads to test failures such as:
>
> [ERROR] testRenameRetryFailureAsHTTP404(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRename) Time elapsed: 9.133 s <<< ERROR!
> Invalid auth type: SharedKey is being used, expecting OAuth
> at org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getTokenProvider(AbfsConfiguration.java:643)
>
> This Jira is to fix these tests.
[jira] [Reopened] (HADOOP-17015) ABFS: Make PUT and POST operations idempotent
[ https://issues.apache.org/jira/browse/HADOOP-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Marqardt reopened HADOOP-17015:
--------------------------------------

We should revisit PR 2021 and try to find a better solution for rename. Users expect rename to be atomic. The service implementation is atomic, but we have this client-side idempotency issue. The current fix relies on time: it assumes that if the destination was recently updated while we were executing a retry policy, we succeeded. That may not be the case. For example, users may rely on rename (with overwrite = false) of a file to synchronize or act as a distributed lock, so whoever renames successfully acquires the lock. With the fix in PR 2021, more than one caller could acquire this lock at the same time. Instead, I think we could allow the client to provide a UUID for the rename operation and persist this UUID in the metadata of the destination blob upon successful completion of a rename; then, if we run into this idempotency issue and the client gets a 404 (source does not exist), we can check the destination blob's metadata to see whether the UUID matches.

> ABFS: Make PUT and POST operations idempotent
> ---------------------------------------------
>
>                 Key: HADOOP-17015
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17015
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.2.1
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Major
>             Fix For: 3.4.0
>
> Currently, when a PUT or POST operation times out after the server has already executed it successfully, the driver does not check whether the operation succeeded and simply retries it. This can cause the driver to throw invalid user errors.
>
> Sample scenario:
> # A rename request times out, though the server has successfully executed the operation.
> # The driver retries the rename and gets a source-not-found error.
>
> In this scenario, the driver needs to check whether the rename is being retried, and succeed if the source is not found but the destination is present.
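The UUID-based scheme proposed in the reopen comment can be sketched with plain JDK types. This is an assumed illustration of the protocol, not ABFS code: a `Map` stands in for the destination blob's metadata, and `renameSucceededOnRetry` is a hypothetical helper:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of the proposed client-side idempotency check for rename:
// each rename carries a UUID which is persisted in the destination's
// metadata on success. If a retry then sees "source not found", the
// caller checks whether the destination holds its own UUID before
// declaring the rename successful.
public class RenameIdempotencyDemo {
    // destination path -> rename-id persisted on successful rename
    static final Map<String, String> destMetadata = new HashMap<>();

    static boolean renameSucceededOnRetry(String dest, String myRenameId) {
        return myRenameId.equals(destMetadata.get(dest));
    }

    public static void main(String[] args) {
        String id = UUID.randomUUID().toString();
        destMetadata.put("/dst/file", id); // first attempt actually succeeded server-side

        // The retry gets a 404 on the source; the UUID check disambiguates:
        System.out.println(renameSucceededOnRetry("/dst/file", id));                           // true
        System.out.println(renameSucceededOnRetry("/dst/file", UUID.randomUUID().toString())); // false
    }
}
```

Unlike the time-based heuristic in PR 2021, the UUID check cannot hand the "lock" to two callers at once, because only the rename that actually wrote the destination metadata sees a match.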
[jira] [Created] (HADOOP-17089) WASB: Update azure-storage-java SDK
Thomas Marqardt created HADOOP-17089:
----------------------------------------

             Summary: WASB: Update azure-storage-java SDK
                 Key: HADOOP-17089
                 URL: https://issues.apache.org/jira/browse/HADOOP-17089
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/azure
    Affects Versions: 3.2.0, 3.1.0, 3.0.0, 2.9.0, 2.8.0, 2.7.0
            Reporter: Thomas Marqardt
            Assignee: Thomas Marqardt

WASB depends on the Azure Storage Java SDK. There is a concurrency bug in the Azure Storage Java SDK that can cause the results of a list blobs operation to appear empty. This causes the FileSystem listStatus and similar APIs to return empty results. This has been seen in Spark workloads when jobs use more than one executor core. See https://github.com/Azure/azure-storage-java/pull/546 for details on the bug in the Azure Storage SDK.
[jira] [Resolved] (HADOOP-17050) S3A to support additional token issuers
[ https://issues.apache.org/jira/browse/HADOOP-17050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-17050.
-------------------------------------
    Resolution: Fixed

> S3A to support additional token issuers
> ---------------------------------------
>
>                 Key: HADOOP-17050
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17050
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Gabor Bota
>            Assignee: Steve Loughran
>            Priority: Minor
>             Fix For: 3.3.1
>
> In {{org.apache.hadoop.fs.s3a.auth.delegation.AbstractDelegationTokenBinding}}, the {{createDelegationToken}} method should return a list of tokens. With this functionality, the {{AbstractDelegationTokenBinding}} can get two different tokens at the same time.
> {{AbstractDelegationTokenBinding.TokenSecretManager}} should be extended to retrieve secrets and look up delegation tokens (use the public API for SecretManager in Hadoop).
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/727/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint jshint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
    hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

findbugs : module:hadoop-common-project/hadoop-minikdc
    Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method At MiniKdc.java:[line 515]

findbugs : module:hadoop-common-project/hadoop-auth
    org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192]

findbugs : module:hadoop-common-project/hadoop-common
    org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:[line 44]
    org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:[line 67]
    Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method At FileUtil.java:[line 118]
    Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method At RawLocalFileSystem.java:[line 383]
    Useless condition: lazyPersist == true at this point At CommandWithDestination.java:[line 502]
    org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java:[line 78]
    org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:[line 97]
    org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java:[line 71]
    org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:[line 89]
    Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method At IOUtils.java:[line 389]
    Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398]
    org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory) unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:[line 49]
    org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) unconditionally sets the field miniClusterMode At DefaultMetricsSystem.java:[line 92]
    Useless object stored in variable seqOs of method org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier, AbstractDelegationTokenSecretManager$DelegationTokenInformation, boolean) At ZKDelegationTokenSecretManager.java:
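The "incorrectly handles double value" findings on DoubleWritable/FloatWritable above flag hand-rolled floating-point comparisons. A sketch of the conventional fix (illustrative, not the Writable source): delegate to `Double.compare`/`Float.compare`, which impose a total order, instead of `<`/`>` operators that return false for any comparison involving NaN:

```java
// Demonstrates why Double.compare is the safe comparator: it orders
// NaN above every other value and -0.0 below 0.0, so compareTo stays
// consistent with equals and sort order is total.
public class FpCompareDemo {
    static int compareDoubles(double a, double b) {
        return Double.compare(a, b);
    }

    public static void main(String[] args) {
        System.out.println(compareDoubles(1.0, 2.0) < 0);        // true
        System.out.println(compareDoubles(Double.NaN, 1.0) > 0); // true: NaN sorts last
        System.out.println(compareDoubles(-0.0, 0.0) < 0);       // true: signed zero ordered
    }
}
```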
Re: [VOTE] Release Apache Hadoop 3.1.4 (RC1)
Correction: I meant "withdraw this vote now for RC1" and "create an RC2".

Ayush, I'll add it if it applies without conflict.

Regards,
Gabor

On Wed, Jun 24, 2020 at 1:01 PM Ayush Saxena wrote:
> Hi Gabor,
> As you are going ahead with another RC,
> Please include : HDFS-15323 as well if possible.
>
> https://issues.apache.org/jira/browse/HDFS-15323
>
> [...]
[jira] [Resolved] (HADOOP-17015) ABFS: Make PUT and POST operations idempotent
[ https://issues.apache.org/jira/browse/HADOOP-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Marqardt resolved HADOOP-17015.
--------------------------------------
    Resolution: Fixed

Sneha and I discussed this. The common Hadoop scenario is a case where you have one or more tasks, each operating on different source files, all attempting to rename to a common destination. In this scenario, the fix in PR 2021 is correct. There are scenarios where PR 2021 will lead to incorrect results, but they seem to be very contrived and unlikely in Hadoop. A work item will be opened to investigate the need to improve this on the server side, for example by allowing an operation-id to be passed to the rename operation and persisted in the destination metadata, but for now we have this fix to the driver on the client side.

> ABFS: Make PUT and POST operations idempotent
> ---------------------------------------------
>
>                 Key: HADOOP-17015
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17015
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.2.1
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Major
>             Fix For: 3.4.0
>
> Currently, when a PUT or POST operation times out after the server has already executed it successfully, the driver does not check whether the operation succeeded and simply retries it. This can cause the driver to throw invalid user errors.
>
> Sample scenario:
> # A rename request times out, though the server has successfully executed the operation.
> # The driver retries the rename and gets a source-not-found error.
>
> In this scenario, the driver needs to check whether the rename is being retried, and succeed if the source is not found but the destination is present.
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/183/

[Jun 23, 2020 8:42:25 AM] (noreply) HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS. Contributed by Uma Maheswara Rao G.
[Jun 23, 2020 8:59:51 AM] (Xiaoqiao He) HADOOP-17068. Client fails forever when namenode ipaddr changed. Contributed by Sean Chow.
[Jun 23, 2020 10:13:04 AM] (Szilard Nemeth) YARN-10316. FS-CS converter: convert maxAppsDefault, maxRunningApps settings. Contributed by Peter Bacsko
[Jun 23, 2020 8:12:29 PM] (noreply) HDFS-15383. RBF: Add support for router delegation token without watch (#2047)

[Error replacing 'FILE' - Workspace is not accessible]
[jira] [Created] (HADOOP-17090) Increase precommit job timeout from 5 hours to 20 hours
Akira Ajisaka created HADOOP-17090:
--------------------------------------

             Summary: Increase precommit job timeout from 5 hours to 20 hours
                 Key: HADOOP-17090
                 URL: https://issues.apache.org/jira/browse/HADOOP-17090
             Project: Hadoop Common
          Issue Type: Improvement
          Components: build
            Reporter: Akira Ajisaka

We frequently increase the timeout for testing and then undo the change before committing:
* https://github.com/apache/hadoop/pull/2026
* https://github.com/apache/hadoop/pull/2051
* https://github.com/apache/hadoop/pull/2012
* and more...

I'd like to increase the default timeout to reduce this work.
[jira] [Created] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
Sneha Vijayarajan created HADOOP-17092:
------------------------------------------

             Summary: ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
                 Key: HADOOP-17092
                 URL: https://issues.apache.org/jira/browse/HADOOP-17092
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/azure
            Reporter: Sneha Vijayarajan
             Fix For: 3.4.0
[jira] [Created] (HADOOP-17093) ABFS: GetAccessToken unrecoverable failures are being retried
Sneha Vijayarajan created HADOOP-17093:
------------------------------------------

             Summary: ABFS: GetAccessToken unrecoverable failures are being retried
                 Key: HADOOP-17093
                 URL: https://issues.apache.org/jira/browse/HADOOP-17093
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/azure
            Reporter: Sneha Vijayarajan
             Fix For: 3.4.0

When an invalid config is set, the call to fetch a token fails with:

{code:java}
throw new UnexpectedResponseException(httpResponseCode, requestId,
    operation + " Unexpected response."
    + " Check configuration, URLs and proxy settings."
    + " proxies=" + proxies,
    authEndpoint, responseContentType, responseBody);
{code}

The issue here is that UnexpectedResponseException is not recognized as an irrecoverable state and ends up being retried. This needs to be fixed.
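The fix the issue calls for amounts to classifying token-fetch failures before retrying. This is an assumed sketch of such a classifier, not the ABFS retry policy: the class and method names are hypothetical, and the exact set of retryable codes is a design choice:

```java
// Sketch: surface 4xx "bad configuration" token responses immediately
// instead of retrying them like transient faults. A misconfigured client
// id, secret, or authority URL will fail identically on every attempt,
// so retrying only adds long waits before the real error is reported.
public class RetryClassifierDemo {
    static boolean isRetryable(int httpStatus) {
        // 408 (request timeout) and 429 (throttling) are worth retrying.
        if (httpStatus == 408 || httpStatus == 429) return true;
        // 5xx server errors may be transient; other 4xx are permanent.
        return httpStatus >= 500 && httpStatus <= 599;
    }

    public static void main(String[] args) {
        System.out.println(isRetryable(503)); // true: transient server error
        System.out.println(isRetryable(400)); // false: bad request, fail fast
    }
}
```

Under this split, the `UnexpectedResponseException` path for an invalid config (a 4xx response from the token endpoint) would fail fast rather than exhausting the retry budget.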
[jira] [Resolved] (HADOOP-17089) WASB: Update azure-storage-java SDK
[ https://issues.apache.org/jira/browse/HADOOP-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Marqardt resolved HADOOP-17089.
--------------------------------------
    Fix Version/s: 3.3.1
     Release Note: Azure WASB bug fix that can cause list results to appear empty.
       Resolution: Fixed

trunk:
commit 4b5b54c73f2fd9146237087a59453e2b5d70f9ed
Author: Thomas Marquardt
Date: Wed Jun 24 18:37:25 2020 +

branch-3.3:
commit ee192c48265fe7dcf23bc33f6a6698bb41477ca9
Author: Thomas Marquardt
Date: Wed Jun 24 18:37:25 2020 +

> WASB: Update azure-storage-java SDK
> -----------------------------------
>
>                 Key: HADOOP-17089
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17089
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>    Affects Versions: 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0, 3.2.0
>            Reporter: Thomas Marqardt
>            Assignee: Thomas Marqardt
>            Priority: Major
>             Fix For: 3.3.1
>
> WASB depends on the Azure Storage Java SDK. There is a concurrency bug in the Azure Storage Java SDK that can cause the results of a list blobs operation to appear empty. This causes the FileSystem listStatus and similar APIs to return empty results. This has been seen in Spark workloads when jobs use more than one executor core. See https://github.com/Azure/azure-storage-java/pull/546 for details on the bug in the Azure Storage SDK.